Choice of Microservices Development Environment Impacts Velocity - Part 3
In Part 1 of this series, I highlighted the questions engineering teams should consider for their choice of microservices development environment. Part 2 covered different options currently being used, and the pros and cons of those options. This final part focuses on the need for a new approach to address the limitations of the current options.
Most current developer tools were designed for monoliths and do not explicitly address the needs of microservices architectures -- in particular, the need for parallel development given the distributed nature of the services. Development processes are being adapted to fit the available tools rather than built to be optimal for developing microservices applications. As a result, to test and debug their services during development, developers end up spending significant time on tasks that could be automated or eliminated entirely. A fundamentally new approach is needed to substantially improve the microservices development process, and the rest of this article discusses one such approach.
Here are some of the capabilities that I believe are extremely valuable to microservices developers and would help eliminate much of the friction in current development processes.
Using an IDE for development: It is hard to beat the efficiency and effectiveness of using an IDE to validate functionality during development. While it may still be necessary to test services in a dev cluster to identify load- or concurrency-related issues, functionality testing can be accomplished locally.
Visibility into egress requests: In a monolith, you could follow the entire request execution in an IDE. With microservices, you need to see the egress requests your service makes -- including their parameters and bodies -- for each ingress request it handles.
Mocks that are updated continuously and automatically: Mocking egress requests eliminates the dependency on dev clusters and the need to run upstream services locally on the laptop. More importantly, it removes the dependency on the upstream services being live in the development environment. Mocks that are updated automatically whenever changes to upstream services land in the test environment eliminate the need for manual mock maintenance.
Easy visualization of changes at ingress and egress: As a developer makes a change to a service, it is important to identify how it changes the responses to ingress calls, as well as how the egress requests change. An easy visual indication of these changes makes developers' lives a lot simpler.
Example requests and traces: Developers should have sufficient examples -- both ingress and egress requests and responses -- for the service they are developing. Whether the test requests are handcrafted or learned from some environment, a rich set of example requests and responses is needed to support early testing.
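To make the egress-visibility idea above concrete, here is a minimal sketch of recording every outbound call a service makes while handling a given ingress request, keyed by a per-request trace id. All names (`egress_log`, `call_upstream`, `handle_ingress`) are hypothetical illustrations, not the Mesh Dynamics implementation; the outbound HTTP call is stubbed out.

```python
import uuid
from collections import defaultdict

# trace_id -> list of egress calls observed while serving that ingress request
egress_log = defaultdict(list)

def call_upstream(trace_id, service, path, body=None):
    """Wrapper around an outbound request that records it before sending."""
    egress_log[trace_id].append({"service": service, "path": path, "body": body})
    # A real service would perform the HTTP call here; we stub the response.
    return {"status": 200, "from": service}

def handle_ingress(request):
    """Simulated ingress handler that fans out to two upstream services."""
    trace_id = request.get("trace_id") or str(uuid.uuid4())
    reviews = call_upstream(trace_id, "reviews", "/reviews/42")
    ratings = call_upstream(trace_id, "ratings", "/ratings/42")
    return {"trace_id": trace_id, "reviews": reviews, "ratings": ratings}

handle_ingress({"trace_id": "t-1"})
print(egress_log["t-1"])  # the two egress calls made for this ingress request
```

With the trace id propagated to upstream calls, every egress request can be attributed to the ingress request that triggered it -- the raw material for the trace trees discussed later.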
Learning API behavior
Eliminating the dependency on upstream services being live in a dev environment, or running locally on the developer's machine, requires the ability to mock those services. Creating and maintaining mocks can be extremely resource intensive if done manually. An ideal system should continuously learn the behavior of services from different environments -- production, test, and development -- and leverage that learning to build intelligent, continuously updated virtual services. This makes it possible to mimic the entire application environment locally using these virtual services. In addition, the system should allow developers to change the learned behavior or add new behavior to the virtual services manually. This capability is required to achieve parallel development of services when there are interdependencies.
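The core of a continuously updated mock can be sketched very simply: serve the most recently recorded response for each request, and refresh the recordings whenever real upstream traffic is observed. This is a toy illustration with hypothetical names (`record`, `mock_upstream`), not Mesh Dynamics' actual mechanism.

```python
# (method, path) -> latest recorded response for that request shape
recorded = {}

def record(method, path, response):
    """Called whenever real upstream traffic is observed in some environment."""
    recorded[(method, path)] = response

def mock_upstream(method, path):
    """Serve the latest recorded response; fail loudly for unseen requests."""
    try:
        return recorded[(method, path)]
    except KeyError:
        raise LookupError(f"no recorded example for {method} {path}")

# Initial recording captured from a test environment...
record("GET", "/ratings/42", {"rating": 4.2})
assert mock_upstream("GET", "/ratings/42") == {"rating": 4.2}

# ...later the upstream changes its response schema; re-recording the new
# traffic updates the mock with no manual edits.
record("GET", "/ratings/42", {"rating": 4.2, "count": 118})
assert mock_upstream("GET", "/ratings/42")["count"] == 118
```

Because the recordings are refreshed from live traffic, the mock drifts along with the upstream service instead of going stale the way hand-written stubs do.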
But how do we learn the behavior of different services in a microservices world where not all services are exposed publicly? The learning capability can either be part of the service itself or be a separate component placed close to the service with appropriate access to it. Adding this capability to existing services should be seamless -- with no code changes, or at most minimal ones.
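One way to learn behavior without touching the service's code is to interpose a recording proxy in front of it, as the "separate component placed close to the service" above suggests. The sketch below shows the idea in-process with hypothetical names (`recording_proxy`, `observations`); a real deployment would sit at the network layer, e.g. as a sidecar.

```python
# Learned (request, response) pairs observed by the proxy
observations = []

def recording_proxy(service_handler):
    """Wrap a service's handler; learn behavior from the traffic it sees."""
    def proxied(request):
        response = service_handler(request)       # pass through untouched
        observations.append((request, response))  # record the behavior
        return response
    return proxied

def ratings_service(request):
    """Stand-in for the real, unmodified upstream service."""
    return {"movie": request["movie"], "rating": 4.2}

ratings = recording_proxy(ratings_service)
ratings({"movie": "movieinfo-42"})
print(len(observations))  # one observed request/response pair
```

The service handler is unchanged; only the wiring in front of it differs, which is what makes the "no code change" deployment model plausible.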
Learning the API behavior not only enables local development using virtual environments, but also enables comparing that behavior across time or versions.
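Comparing behavior across versions reduces, at its simplest, to diffing recorded responses for the same request field by field. Here is a minimal, hypothetical sketch of such a comparison; real tooling would also handle arrays, type changes, and intentional exclusions (timestamps, ids).

```python
def diff_responses(old, new, prefix=""):
    """Report field-level differences between two recorded JSON-like responses."""
    diffs = []
    for key in sorted(set(old) | set(new)):
        path = f"{prefix}{key}"
        if key not in old:
            diffs.append(f"added: {path} = {new[key]!r}")
        elif key not in new:
            diffs.append(f"removed: {path}")
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            diffs.extend(diff_responses(old[key], new[key], path + "."))
        elif old[key] != new[key]:
            diffs.append(f"changed: {path}: {old[key]!r} -> {new[key]!r}")
    return diffs

# Response recorded from the old service version vs. the new one
old = {"rating": 4.2, "meta": {"count": 118}}
new = {"rating": 4.5, "meta": {"count": 118}, "source": "cache"}
print(diff_responses(old, new))
```

A list of such differences is exactly what a tool can surface visually to a developer reviewing the impact of a change.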
Mesh Dynamics API Studio
We are trying to address these issues with Mesh Dynamics API Studio, with the goal of enabling microservices developers to debug their services in their IDE while mimicking the entire environment locally on their laptops with minimal human effort.
To fully leverage the capabilities of the API Studio, Mesh Dynamics listeners should be deployed in various environments like dev, testing and staging clusters. This enables Mesh Dynamics to learn the behavior of the services deployed in those environments.
In the example below, movieinfo is the service under development, and reviews, ratings, details, and restwrapjdbc are the upstream services used during the execution of a request to movieinfo. The tool captures the context of each request and shows the services involved in that context as a trace tree, both for the original environment (where the trace on the left was captured) and for the virtual test environment (the trace on the right). The virtual services are context aware and can mimic the real services' behavior based on continuous learning. In addition, the behavior of the virtual services can be changed easily by the developer.
The behavior of the service under development is compared with the original behavior and the differences are highlighted in yellow.
A local microservices development environment based on this completely new approach can fill the gaps in the existing solutions and help developers leverage the best features of the various development environments available today.
Comparison with common microservices development environments
We would love to hear your thoughts and comments. To try out the Mesh Dynamics API Studio, please contact us at firstname.lastname@example.org.