Venky Ganti

Making Peace with Dev Clusters

When developers make changes to a microservice, the development environment needs to support testing and debugging those changes as efficiently as possible. Developers' needs include:

  • Testing and debugging changes: The change could involve refactoring existing APIs with no changes to external API interactions. Or it could involve parallel development of APIs with dependencies such as contracts and parameter values.

  • Creating service tests for CI pipelines where each service can be tested in isolation.

Approaches Evolving to Address the Above Issues

A few common approaches have evolved and are being adopted by teams building microservices applications. These are:

  • Shared development clusters to debug and test changes

  • Dedicated namespaces for each developer

  • API mocking

We discuss the advantages and disadvantages of each of these approaches below.

Dev Clusters

A common approach is to rely on a dev cluster where a version of the entire application is deployed. When a developer needs to debug a change, they push a version of the service to the dev cluster, test their changes for bugs, and iterate until they are done. There are two different types of dev clusters.

Shared dev clusters: Multiple developers share the same dev cluster for all their debugging and testing needs. Each developer pushes the service under development to the dev cluster to debug it. If a developer needs to use a service whose current version in the dev cluster is buggy, there is no way around the problem. The typical solution is to work on other things until the upstream service is debugged and a working version is deployed.


Advantages

  • Developers can use current versions of other services.

Pain points

  • Devs cannot use their IDE because their service is running remotely in a dev cluster. (Solutions such as telepresence.io and VS Code Bridge to Kubernetes are emerging to enable running the service locally in the IDE during debugging. These solutions are discussed below.)

  • Services in a shared dev cluster tend to be under development and are often buggy. With multiple microservice APIs being updated concurrently, developers are often blocked by dependencies on buggy services in the dev cluster.

  • For changes requiring concurrent updates to multiple services, developers of downstream services are blocked until other upstream services are ready to serve the new contracts.

Dedicated dev cluster namespaces: Each developer, or a small group of developers, uses a dedicated namespace in a cluster for their debugging needs. Sometimes, teams rely on “ephemeral” namespaces, where each developer can have their own namespace that is deleted when they are done.


Advantages

  • Developers can always work with stable versions of other services.

Pain points

  • Devs cannot use their IDE.

  • Developers of downstream services are blocked until other upstream services are ready to serve the new contracts.

  • Dedicated namespaces for every developer can get fairly expensive depending on the size of the team and the application footprint.

Local-Remote Bridge

A recent “local-remote bridge” approach, advocated by Telepresence, Azure Dev Spaces, and VS Code Bridge to Kubernetes, enables a microservice running locally to connect with other microservices running in a remote Kubernetes dev cluster. The following figure illustrates this approach. Multiple developers can use the same dev cluster simultaneously while developing and testing their own services locally.

With the local-remote bridge, developers are now able to use their IDE to develop and test their changes to a service, removing one of the pain points for developers.

API/Service Mocks

Some engineering teams rely on API mocks during development and for regression testing in CI. Mocking external APIs has long been popular in the context of distributed services (gmock, mockito, etc.). The API mock creation process typically consumes significant human effort.

With microservices, the number of APIs explodes. Moreover, these APIs tend to evolve quite rapidly with the application. This creates the following issues for microservices development teams:

  • Developers need to invest a lot of effort to create and maintain API mocks.

  • Despite the effort, a good fraction of the API mocks is often stale and does not match the real API functionality.
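To make the maintenance burden concrete, here is a minimal Python sketch of a hand-written mock using `unittest.mock`; the `users_client` service and its `get_user` contract are hypothetical. Whenever the real API's response shape changes, the canned return value must be updated by hand, or the mock silently drifts out of step with the real service.

```python
from unittest.mock import Mock

# Hypothetical client for a downstream "users" microservice.
# In a real codebase, this object would issue HTTP calls.
users_client = Mock()

# The canned response encodes the API contract as the mock's
# author understood it at the time of writing. If the real API
# later renames "name" or changes the shape, this line goes stale.
users_client.get_user.return_value = {"id": 42, "name": "Ada"}

def greeting(client, user_id):
    """Code under test: depends on the downstream users API."""
    user = client.get_user(user_id)
    return f"Hello, {user['name']}!"

print(greeting(users_client, 42))  # -> Hello, Ada!
```

Multiply this by hundreds of rapidly evolving APIs, and keeping the canned responses accurate becomes a significant, ongoing cost.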

Mesh Dynamics Data-Driven Continuous Learning Approach

We take a hybrid data-driven approach where we enable microservices developers to:

  • Develop and test changes locally in their IDE

  • Leverage auto-created API mocks for local development and testing.

  • Switch seamlessly between live services and auto-created API mocks for egress requests.

  • Manually customize API mocks. During concurrent development, API mocks can be manually customized for APIs that are not available yet.
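The live-versus-mock switch for egress requests can be sketched as a simple routing decision. This is only an illustrative Python sketch, not Mesh Dynamics' actual mechanism; the endpoint URL, environment variable, and response data are all hypothetical.

```python
import os

# Hypothetical live endpoint and captured mock data.
LIVE_URL = "http://recommender.dev-cluster.svc:8080"
MOCK_RESPONSES = {"/recommendations?user=42": ["book-1", "book-7"]}

def fetch_recommendations(user_id, use_mock=None):
    """Route an egress request to either a live service or a mock.

    A real implementation would issue an HTTP request in the live
    branch; here we only sketch the routing decision itself.
    """
    if use_mock is None:
        # Toggled per run, so a developer can switch seamlessly
        # between mocks and live services without code changes.
        use_mock = os.environ.get("USE_API_MOCKS", "1") == "1"
    path = f"/recommendations?user={user_id}"
    if use_mock:
        return MOCK_RESPONSES[path]
    raise NotImplementedError(f"would GET {LIVE_URL}{path}")

print(fetch_recommendations(42, use_mock=True))
```

The point of the toggle is that the service under test never knows (or cares) whether its dependency is real, so switching costs the developer nothing.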

To achieve this, Mesh Dynamics captures API traffic between microservices in trustworthy environments such as integration test environments, and continuously learns and updates API mocks from ongoing traffic in these environments. As a result, the API mocks are in lock-step with API functionality.
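The continuous-learning idea can be sketched as a record-and-replay store that is refreshed by every captured exchange. This toy Python sketch only illustrates the concept; the class name and traffic data are hypothetical.

```python
class TrafficLearnedMock:
    """Toy sketch of a mock learned from captured API traffic
    rather than written by hand."""

    def __init__(self):
        # (method, path) -> latest observed response
        self._responses = {}

    def observe(self, method, path, response):
        # Each exchange captured from a trustworthy environment
        # (e.g. integration tests) updates the mock, keeping it
        # in lock-step with the API's current behavior.
        self._responses[(method, path)] = response

    def replay(self, method, path):
        # Serve the most recently observed response as the mock.
        return self._responses[(method, path)]

mock = TrafficLearnedMock()
# Traffic captured from an integration test run:
mock.observe("GET", "/recommendations?user=42", ["book-1", "book-7"])
# The API's behavior changed; the next captured exchange
# refreshes the mock automatically, with no human effort:
mock.observe("GET", "/recommendations?user=42", ["book-2"])

print(mock.replay("GET", "/recommendations?user=42"))  # -> ['book-2']
```

Because the mock is regenerated from real traffic, the staleness problem of hand-written mocks largely disappears.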

When a developer needs to test and debug changes to a microservice, Mesh Dynamics API Studio enables the developer to use the auto-created API mocks for all services her service depends on. Where necessary and available, she can also use a live service ('Recommender' in the figure above) alongside mocks for other APIs. Thus, Mesh Dynamics’ hybrid data-driven approach addresses all of the hurdles constraining microservices developers.


The following table summarizes the developer pain points addressed by each approach.