Choice of Microservices Development Environment Impacts Velocity - Part 2
In Part 1 of this topic, we raised a number of questions to consider about the choice of development environment for microservices. Part 2 covers the different options and tools available today, and the pros and cons of the choices.
The options can be divided broadly into two categories based on how outgoing calls to the dependent services are fulfilled while developing the APIs:
Run all the dependent services locally or remotely in the cloud
Mock all the dependent services
Let us look at the benefits and limitations of each approach.
RUN DEPENDENT SERVICES LOCALLY OR IN THE CLOUD
There are three main options for running the dependent services locally:
Run all dependent services natively on the development machine, with a unique port reserved for each service.
If all the services are dockerized, use Docker Compose to run all dependent services, again with a unique port reserved for each service.
Run a local Kubernetes cluster (e.g. Minikube or Docker Desktop for Mac/Windows) and deploy all the services, including the service under development.
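For the Docker Compose option, a compose file along these lines can pin each dependent service to a reserved host port. The service names, images, and ports below are hypothetical placeholders, not from any real project:

```yaml
# docker-compose.yml -- illustrative sketch; substitute your own services.
version: "3.8"
services:
  orders:
    image: example/orders:latest
    ports:
      - "8081:8080"   # unique host port reserved for the orders service
  inventory:
    image: example/inventory:latest
    ports:
      - "8082:8080"   # unique host port reserved for the inventory service
  payments:
    image: example/payments:latest
    ports:
      - "8083:8080"   # unique host port reserved for the payments service
```

With `docker compose up` running the dependencies, the service under development runs in the IDE and points its outgoing calls at `localhost:8081`, `localhost:8082`, and so on.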
Since the dependent services are real, the test cases need not be fixed and predefined; developers can exercise arbitrary, unscripted scenarios.
With options (1) and (2) above, the service under development can be run in an IDE. Changes can be tested immediately, and useful IDE features such as breakpoints and thread stacks can be fully leveraged.
Dependencies create a challenge for fast parallel development. For example, if a new feature for service A requires a change to a dependent service B (for instance, a new API, or a change in response from an existing API), then testing service A is gated on the modified service B becoming available first.
Running all of the dependent services locally on the developer's laptop can run into resource constraints once the number of dependent services exceeds four or five.
If you use a local Kubernetes cluster (e.g. Minikube), you lose the ability to leverage your IDE for debugging.
If you are using a local Kubernetes cluster, every change requires the Docker image to be rebuilt and redeployed, which is a cumbersome process. Tools such as Tilt and Skaffold are evolving to efficiently build and deploy each change to the cluster.
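As a flavor of how such tools shorten that cycle, here is a minimal Skaffold configuration; the image name, manifest path, and schema version are illustrative and would need to match your project:

```yaml
# skaffold.yaml -- illustrative sketch; adjust image name and manifest paths.
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: example/orders      # image for the service under development
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml               # Kubernetes manifests to apply on each change
```

Running `skaffold dev` then watches the source tree, rebuilds the image, and redeploys to the local cluster on every file change, rather than requiring a manual build-push-deploy loop.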
Running dependent services remotely in the cloud addresses the resource constraints associated with running services locally, but leads to the following challenges:
Shared dev clusters: Most organizations create multiple shared namespaces, with one namespace for each team.
As different team members start pushing their changes to the same namespace, the behavior of the shared namespace often becomes unstable.
Developers cannot use their IDEs to debug their services when using shared dev clusters. Debugging needs to be done by adding print statements to track code execution and identify problems, which isn't nearly as efficient as debugging in an IDE. Note: Using an IDE with tools like Telepresence will not work with shared dev clusters, because Telepresence swaps the service being debugged with a proxy service, which impacts other developers.
Dedicated namespace for each developer: A dedicated namespace for each developer avoids the instability problem above, and tools like Telepresence make it possible to use an IDE with a dedicated namespace. However, dedicated namespaces get expensive very quickly as the team size grows.
The cycle of repeatedly packaging and updating the dev cluster after every code change introduces delays in the development process.
MOCK DEPENDENT SERVICES
Another option for fulfilling the outgoing requests from the service under development is to mock the dependent services. The mock servers can run locally or remotely in the cloud. WireMock, Hoverfly, and Postman are some of the commonly used mocking tools.
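As a flavor of how these tools are configured, a WireMock stub is a JSON mapping that pairs a request matcher with a canned response. The URL and payload below are hypothetical, chosen only for illustration:

```json
{
  "request": {
    "method": "GET",
    "url": "/inventory/demo-item"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": { "sku": "demo-item", "in_stock": 42 }
  }
}
```

The service under development simply points its dependent-service URL at the mock server, and every GET to `/inventory/demo-item` returns this fixed response.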
Debug your service in IDE: Run the service being developed in your IDE and mock all external services.
Mock services are very lightweight and consume few resources; this setup will not slow down your laptop even when everything runs locally.
Developers need not wait for a dependent service's API development to be complete in order to test their own API. The dependent service's API responses can be mocked, and development can proceed without being gated on the dependent service being ready.
Developers only need to create mocks for the first-level dependent services, i.e. mocks for the egress requests from the service being developed are sufficient.
External services (those provided by vendors outside the organization) can also be mocked.
Simulating failure conditions and introducing response latency are quite easy with mock services.
However, most mock solutions currently available have the following limitations:
The biggest problem with current mocking tools is that the mocks end up being static and lose relevance quickly. Updating the mocks requires manual effort, and the tools are not geared to automatically update the mocks based on the changes happening in the services. As dev teams keep improving velocity, the static nature of manually generated mocks becomes a hurdle.
Current mocking tools are limited in their ability to offer a variety of nuanced responses to similar requests. This capability is often needed in order to simulate and test the behavior of a service A, where variations in the responses from a dependent service B for the same request trigger different behaviors of the calling service A. Typical examples include applications where the response from the dependent service changes as a result of some externality: stock price changes for the same ticker, airline reservations where availability and prices change over time, or specific corner cases that you are trying to test.
How can I solve these challenges while developing my microservice? I will discuss solutions to these questions in Part 3 of this article.