Muthumani Nambi

Choice of Microservices Development Environment Impacts Velocity - Part 1

Distributed software applications using a microservices-based architecture are becoming common in the application development space. While microservices-based architecture offers many benefits, it also creates new challenges. In this post, I will explore how the choice of development environment impacts development velocity.

SaaS, scalability, and agility are factors driving the adoption of microservices architecture. Along with the benefits, it introduces complexity in deployment, monitoring, debugging, testing, and development. Tools are evolving to address some of these challenges, particularly in the deployment and monitoring areas: Kubernetes for deployment, Prometheus/Grafana for monitoring, OpenTracing for distributed tracing, ELK/EFK for centralized logging, and so on.

How are tools evolving to support the development of microservices?

Before answering this question, let us understand the challenges in microservices development and debugging. In the early stages of microservices adoption, when you have just a few services, the needs may not appear to be much different from a monolith. However, by the time you get to 12-15 services, the needs diverge significantly.

As a developer, I work on only one or a couple of services at a time. When my service is deployed, requests come into it, and it may in turn make requests to other services. Before I check in my changes from the development environment, how do I test my service? Below are several questions that should be considered.

  1. How can I drive requests that mimic the requests in the integration / staging / prod environments? Should I handcraft a variety of requests covering different use cases?

  2. What if an issue in my service is discovered in integration/staging/prod? How can I send exactly the same request to the service running in an IDE?
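One low-tech way to replay a production request locally is to capture it (from access logs or tracing) as structured data and rebuild it against the service running in the IDE. Here is a minimal Python sketch; the capture format, paths, and localhost port are illustrative assumptions, not from any particular tool:

```python
import urllib.request

def build_replay_request(logged: dict, target_base: str) -> urllib.request.Request:
    """Rebuild a captured request against a local copy of the service.

    `logged` uses an assumed capture shape: {"method", "path", "headers", "body"}.
    """
    body = logged.get("body")
    data = body.encode("utf-8") if body is not None else None
    req = urllib.request.Request(
        url=target_base.rstrip("/") + logged["path"],
        data=data,
        method=logged["method"],
    )
    for name, value in logged.get("headers", {}).items():
        # The Host header must point at the local service, so drop the captured one.
        if name.lower() != "host":
            req.add_header(name, value)
    return req

# Hypothetical capture, replayed against a service running in the IDE:
captured = {
    "method": "POST",
    "path": "/v1/orders",
    "headers": {"Content-Type": "application/json", "Host": "prod.example.com"},
    "body": '{"sku": "A-100", "qty": 2}',
}
req = build_replay_request(captured, "http://localhost:8080")
# urllib.request.urlopen(req) would send it once the local service is up.
```

Sending the rebuilt request with a debugger attached then reproduces the exact production input locally.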

  3. If the changes I made alter the response of an API that my service exposes, is there an easy way to compare the earlier response and the current response?
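For comparing the earlier and current responses of an API, even a plain textual diff of the JSON goes a long way. A small Python sketch (the two payloads are made-up examples):

```python
import difflib
import json

def diff_json(old, new):
    """Return a unified diff of two JSON-serializable responses.

    Keys are sorted so the diff reflects content changes, not dict ordering.
    """
    old_lines = json.dumps(old, indent=2, sort_keys=True).splitlines()
    new_lines = json.dumps(new, indent=2, sort_keys=True).splitlines()
    return "\n".join(difflib.unified_diff(
        old_lines, new_lines, fromfile="before", tofile="after", lineterm=""))

before = {"status": "ok", "items": [{"sku": "A-100", "qty": 2}]}
after = {"status": "ok", "items": [{"sku": "A-100", "qty": 3}]}
print(diff_json(before, after))
```

An empty diff means the change is backward-compatible at the payload level, which makes this easy to wire into a pre-check-in test.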

  4. When my service makes calls to other services, how are those requests fulfilled in my development environment? What are the options available today?

  5. Should I run all the services that my service depends on locally, including the full dependency chain?

  6. How should I run the dependent services? In my local Kubernetes cluster? Using docker-compose? Running the services natively? Do I have enough resources on my laptop to run all the services locally?

  7. How do I ensure that all the dependent services deployed are at the right versions?

  8. If the dependent services are running in the cloud (for example, using solutions like Tilt, Skaffold, Kelda), would it be cost-prohibitive to have dedicated environments for each developer? If it is a shared environment, how do I ensure that the environment is actually usable for every developer when they need it?

  9. Should I use mocks for the dependent services? How do I create faithful mock responses that correspond to my test requests? Should I use tools like WireMock, VCR, or Hoverfly? Can I invest the time it takes to keep the handcrafted mock responses up to date as services evolve?
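The alternative to handcrafting mocks, which tools like WireMock, VCR, and Hoverfly embody in various forms, is record-and-replay: capture real responses once and serve them back keyed by the incoming request. A toy Python sketch of the pattern (the capture format here is an assumption for illustration, not any of those tools' actual formats):

```python
import json

class ReplayMock:
    """Serves previously recorded responses keyed by (method, path).

    Stands in for a dependent service during local testing; the recordings
    would typically be captured from a real environment and stored as files.
    """

    def __init__(self):
        self._recordings = {}

    def record(self, method, path, status, body):
        """Store one captured response for later replay."""
        self._recordings[(method.upper(), path)] = (status, body)

    def respond(self, method, path):
        """Replay the recorded response, or a 404 if nothing was captured."""
        try:
            return self._recordings[(method.upper(), path)]
        except KeyError:
            return (404, {"error": f"no recording for {method} {path}"})

# Record once (e.g. from a staging capture), then serve repeatedly:
mock = ReplayMock()
mock.record("GET", "/v1/users/42", 200, {"id": 42, "name": "Ada"})
status, body = mock.respond("get", "/v1/users/42")
print(status, json.dumps(body))  # 200 {"id": 42, "name": "Ada"}
```

The maintenance question in the list above still applies: recordings go stale as the dependent service evolves, so they need to be re-captured on some cadence.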

  10. Do I need the ability to debug my code with breakpoints from my IDE as I change the code?

  11. Do I have a single tool that solves all these problems, or do I need to use different tools? What options are available today?

Part 2 of this series discusses the different options that are available today.