Why restrict JUnit to unit testing? Why not test the full microservice? – Part 1
The industry is moving towards building software as a set of business capabilities that are developed, deployed, and scaled independently. Microservices architecture makes this possible.
But when it comes to testing microservices applications, the mindset is still biased towards the legacy approach used for monoliths: testing a specific business capability exposed by a service as part of end-to-end tests against the complete application. This contradicts the promise of microservices -- achieving high velocity through independent development and independent deployment of services.
What are my options?
In a monolith application, developers routinely take two steps whenever they change code or functionality. First, they run unit tests to ensure their changes haven’t broken existing functionality; they modify or add unit tests by focusing on the behavior of the smallest units, such as a method or function, and mocking all other dependencies. Second, they validate the functionality of their code changes as part of the overall workflow by running the monolith application on their laptops.
For microservices applications, unit testing is much the same as for a monolith; that process does not need to change. However, the workflow for the second step -- validating the functionality of the service in the context of the overall application -- and the scope of that test remain unclear. That is, how can we test the end-to-end workflow with the changed functionality? Some options to consider for the test configuration:
Deploy the entire application in the developer’s laptop.
Bring up “Cloud in a box” or “Full stack in a box” remotely in the cloud, and deploy the changes.
Deploy the service in a shared cluster and test end-to-end.
Write code to mimic different variants of ingress calls, write code to mock appropriate responses from the upstream services, and maintain both so they stay up to date.
None of the above options seems easy to me as a developer. Perhaps pre-submit independent service testing is simply too complex? Perhaps being able to develop the service independently is good enough, and I should stick with unit tests followed by monolith-style end-to-end testing in a cluster before submitting my code?
Hold on, is end-to-end testing really needed by service developers?
Let us rethink what a microservice is: a service that is loosely coupled, independently developed, independently deployed, and provides a specific business capability.
Going by this definition, what should be the focus of my testing as a developer of my service? Do I really need to worry about the behavior of other services? Can I restrict my testing to focus only on the following?
Business functionality that my service provides,
API contract and functionality that I expose to downstream services (the consumers of my service),
API contract and functionality that I have to consume from upstream services that my service depends on,
Interaction with databases and messaging systems.
The second and third items above are challenges unique to microservices.
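To make these focus areas concrete, here is a minimal sketch -- all names (`OrderService`, `StockClient`, `item-42`) are hypothetical. The business logic under test depends on the upstream inventory service only through a small interface that captures the contract my service consumes, so the service can be validated with that contract stubbed out, without deploying anything else.

```java
// Contract my service consumes from the upstream inventory service
interface StockClient {
    int quantityAvailable(String itemId);
}

// Business capability my service provides
class OrderService {
    private final StockClient stock;

    OrderService(StockClient stock) {
        this.stock = stock;
    }

    // Business rule: an order is accepted only if enough stock is available
    boolean placeOrder(String itemId, int quantity) {
        return stock.quantityAvailable(itemId) >= quantity;
    }
}

public class OrderServiceTest {
    public static void main(String[] args) {
        // Stub the upstream contract instead of deploying the real service
        StockClient stub = itemId -> "item-42".equals(itemId) ? 5 : 0;
        OrderService service = new OrderService(stub);

        if (!service.placeOrder("item-42", 3)) throw new AssertionError("expected accept");
        if (service.placeOrder("item-42", 9)) throw new AssertionError("expected reject");
        System.out.println("order service tests passed");
    }
}
```

The design choice that makes this possible is keeping the upstream dependency behind an interface: the test exercises my business functionality and my consumed contract, and nothing else.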
The key question for every microservices developer is: why should I bother with anything beyond the items above? During development, I focus only on the functionality of my service, not the entire application. Why shouldn’t it be the same during testing? As long as I am able to test the points above, I am independently validating the service I am responsible for. After all, parallel and independent development is the intent of microservices, isn’t it?
But, can a service be tested independently?
While a service can be developed and deployed independently, often it can’t achieve the required business functionality without depending on other services. So, unless those upstream services are accessible, the service can’t be tested independently. In turn, those upstream services themselves might also depend on other services, and the chain can go multiple levels deep depending on the complexity of the application.
Ideally, I want neither a monolith-style end-to-end deployment in a cluster, nor to map out the upstream service chain and deploy that whole chain on my laptop -- just to test my service. Neither approach lets me exploit the full benefits promised by microservices architecture.
What are the feasible options?
As mentioned earlier, ideally service testing should only need to focus on testing ingress/egress communication with other services, and testing the business functionality implemented in the service.
Frameworks like Pact and Spring Cloud Contract have emerged to focus on testing the contract between services. They focus on verifying API contracts -- not on validating business functionality. While these tools are great for testing service contracts locally on the developer’s laptop without setting up end-to-end services, they do not provide the test validation confidence that we need.
Even to achieve contract validation, developers must invest significant effort to set these tools up.
For a brownfield project, introducing these checks involves significant effort, as contracts have to be hand-coded for all the existing APIs.
Developers need to have the discipline to update the contracts with every change.
These tools are focused only on verifying the contracts. Developers still need to verify the business functionality of the services.
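To show the kind of hand-coding involved, here is a plain-JDK sketch of a consumer-side contract check -- the endpoint and field names are hypothetical, and frameworks like Pact express this far more declaratively in their own DSLs. The consumer pins down the request it sends and the response shape it relies on, verified here against a stand-in provider.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StockContractCheck {
    public static void main(String[] args) throws Exception {
        // Stand-in provider returning the response the contract promises
        HttpServer provider = HttpServer.create(new InetSocketAddress(0), 0);
        provider.createContext("/items/42/stock", exchange -> {
            byte[] body = "{\"quantity\": 7}".getBytes();
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        provider.start();
        int port = provider.getAddress().getPort();

        // Consumer-side contract assertions: status code and required field.
        // Every existing API needs checks like these hand-coded and maintained.
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(
                        URI.create("http://localhost:" + port + "/items/42/stock")).build(),
                HttpResponse.BodyHandlers.ofString());
        if (resp.statusCode() != 200) throw new AssertionError("expected 200");
        if (!resp.body().contains("\"quantity\"")) throw new AssertionError("missing quantity field");
        System.out.println("contract check passed");
        provider.stop(0);
    }
}
```

Note what this does and does not buy: the shape of the exchange is verified, but nothing here exercises the business decision the consumer makes with that `quantity` value.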
WireMock is another great tool, addressing the problem of testing the business functionality of services. Developers can write code to mock the upstream services and do service testing locally using JUnit -- without needing to set up the upstream services. But WireMock has its own challenges:
The mock server behavior -- request matching logic, building requests/responses, and so on -- must be handcrafted.
The mock behavior can become stale as upstream services change. Keeping it up to date with the real service behavior requires discipline, and processes that keep the developer aware of changes in the APIs. Unfortunately, such manual processes tend to be fragile.
Tests that pass with local handwritten mocks are not guaranteed to pass integration testing in the CI/CD pipeline. Why? Handwritten mocks go stale very quickly.
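The approach can be sketched with only the JDK's built-in `HttpServer` -- names and endpoints are again hypothetical, and WireMock's own Java DSL is more concise. The pain points above are visible in the code: both the request matching and the canned response are handwritten, and must be updated by hand whenever the real inventory service changes.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OrderServiceMockTest {
    // Business logic under test: accept an order only if upstream stock suffices
    static boolean placeOrder(String stockUrl, int wanted) throws Exception {
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(stockUrl)).build(),
                HttpResponse.BodyHandlers.ofString());
        int available = Integer.parseInt(resp.body().replaceAll("\\D", ""));
        return available >= wanted;
    }

    public static void main(String[] args) throws Exception {
        // Handcrafted mock of the upstream inventory service: the matched path
        // and the canned response both go stale if the real API changes.
        HttpServer mock = HttpServer.create(new InetSocketAddress(0), 0);
        mock.createContext("/items/42/stock", ex -> {
            byte[] body = "{\"quantity\": 7}".getBytes();
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        mock.start();
        String url = "http://localhost:" + mock.getAddress().getPort() + "/items/42/stock";

        // Business-functionality assertions against the mock
        if (!placeOrder(url, 5)) throw new AssertionError("expected accept");
        if (placeOrder(url, 9)) throw new AssertionError("expected reject");
        System.out.println("service test against mock passed");
        mock.stop(0);
    }
}
```

Unlike the contract check, this does exercise the business decision -- but only against whatever behavior the developer last hand-coded into the mock.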
Then what is the solution?
An ideal solution would support the following capabilities:
Mock code should not have to be handwritten.
Mocks should not require constant human supervision to stay current; they should update automatically when the upstream services change.
If a test passes locally, then it must pass in CI/CD -- unless upstream services change between the local run and the CI/CD run.
Mesh Dynamics addresses the challenges discussed above. By listening to traffic in CI/CD runs, Mesh Dynamics can continuously generate up-to-date mocks for all services. Annotation-driven JUnit service test code greatly reduces the effort of writing and maintaining tests, and can be used for service tests that focus on verifying business capability and service interactions without worrying about the complex end-to-end system.
Enough talk. Want to see how it really works? Part 2 of this article dives into a code example of how this can be achieved.