How to test a microservice

Albert Starreveld
Feb 5, 2020


Manual testing and manual deployments are a thing of the past, especially with modern architectural patterns like event-driven architecture. A microservices architecture tends to result in a complex IT landscape that hosts different versions of all sorts of applications, all the time. Many microservices depend on cloud services, too. So how can you build and test a microservice cheaply and quickly, and still be sure everything works?

Automated testing seems an obvious solution. But before you know it, the amount of test code exceeds the amount of production code, and maintaining both the software and the tests increases the time to market.

This blog explains how to test a single microservice. The software architecture of a microservice has a big impact on how to test it. In my previous article, I explained how to apply Domain-Driven Design, Command Query Responsibility Segregation and Ports & Adapters in a real application to make it testable. You’ll find working code samples of this and the concepts explained in this blog on GitHub.

Developer comfort

In the pre-cloud era, it was easy to work on an application on any computer. All a developer needed was a local database instance with test data.

Cloud is a game-changer. A modern application has all sorts of dependencies, ranging from Azure Service Bus to Blob Storage or Key Vault. Emulators are available to run several of these on your local computer, but for an application with many dependencies that means installing, configuring, and updating numerous tools.

Having small test scripts that execute small parts of the application is developer-friendly. These test scripts exercise small bits of the software with mocked dependencies, so there's no need to set up all sorts of emulators and configuration to get feedback. That makes debugging and working on an application simpler than it ever was.

Where to get started?

Using automated tests to make sure an application works isn't risk-free. If done wrong, there's still a big chance the application won't work. Use tests of different granularities to mitigate that risk and apply the testing pyramid:

Unit-tests test individual classes in isolation to validate the details of the application: will a method allow a parameter to be null, for example, or does a class add and subtract numbers correctly? Unit-tests cover the business logic, and as a result most of the tests in a project are going to be unit tests.

Everybody knows unit tests aren't a silver bullet. A unit-test doesn't prove that a unit is used at all, or that the classes that use it can process the values they receive from it. To validate those details, wire multiple units together and invoke methods on the combination. That's component- or integration-testing.

A component-test validates whether a combination of units works when put together. The same question applies to the components themselves: do they work when combined? Invoke multiple components together to test that.

The following image illustrates the different tests and their scopes:

Image 1.) Tests with different granularity

How to interpret the results

Testing is an information-gathering activity done to support decision-making. Usually, it's the stakeholder who has the final say: he or she decides whether to ship the software to the production environment.

A passing unit-test is meaningless to a stakeholder. Business people don't understand WhenContextNull_ShouldThrowArgumentNullException. Testing a rule on a small fragment of the application, without any context, doesn't guarantee that a business requirement is implemented correctly. Who says the unit is used at all?

Usually, a passing holistic test is more comforting for testers and stakeholders. A passing end-to-end test makes them feel more confident about the correctness of the application. Those are the tests that convince the stakeholders.

So why bother making unit tests?

End-to-end tests are black-box tests that test the functional requirements of the application.

Unit-tests and component-tests are white-box tests that cover both the functional and the technical details of the application. Engineers need them to validate the individual components of the application and to work on them more efficiently.

The scope of a unit-test

There are different approaches to unit-testing. In this codebase, a unit test always tests one single class, in isolation. All dependencies are mocked. Unit-tests test fragments of the application.

Common unit-testing pitfalls:

  • Don’t write mocks and stubs yourself. Use frameworks like NSubstitute or Moq instead.
  • Don’t hard-code test-data. Use frameworks like AutoFixture to dynamically generate data instead.
  • Don’t test all methods of a unit in a single test file; that file will get huge. Instead, create a folder per class and add one test file per method.

Writing a unit test isn’t as simple as decorating a method with a [TestMethod] attribute. Use the 5 unit testing guidelines as a guide for creating fast, maintainable unit tests.

Review some unit tests here, here and here.
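To make those guidelines concrete, here is a minimal sketch of what such a unit test could look like, assuming MSTest, NSubstitute, and AutoFixture. The OrderCalculator class and its IDiscountProvider dependency are hypothetical examples, not types from the sample repository:

```csharp
using System;
using AutoFixture;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using NSubstitute;

// Hypothetical production types, only here to make the sample self-contained.
public interface IDiscountProvider
{
    decimal GetDiscountPercentage(string customerId);
}

public class OrderCalculator
{
    private readonly IDiscountProvider _discounts;

    public OrderCalculator(IDiscountProvider discounts)
        => _discounts = discounts ?? throw new ArgumentNullException(nameof(discounts));

    public decimal CalculateTotal(string customerId, decimal subTotal)
        => subTotal - (subTotal * _discounts.GetDiscountPercentage(customerId) / 100m);
}

[TestClass]
public class CalculateTotalTests
{
    [TestMethod]
    public void WhenDiscountProviderNull_ShouldThrowArgumentNullException()
    {
        Assert.ThrowsException<ArgumentNullException>(() => new OrderCalculator(null));
    }

    [TestMethod]
    public void ShouldApplyDiscountFromProvider()
    {
        // Generate test data instead of hard-coding it.
        var fixture = new Fixture();
        var customerId = fixture.Create<string>();

        // Mock the dependency instead of writing a stub by hand.
        var discounts = Substitute.For<IDiscountProvider>();
        discounts.GetDiscountPercentage(customerId).Returns(10m);

        var sut = new OrderCalculator(discounts);

        var total = sut.CalculateTotal(customerId, 200m);

        Assert.AreEqual(180m, total);
    }
}
```

Because the dependency is mocked and the data is generated, the test pins down the behaviour of this one class and nothing else.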

The scope of a component-test

Debugging a microservice can be daunting. Setting up (emulated) cloud components with applicable test data for the use case you’re working on can be challenging, especially when you’re sharing an environment with coworkers. Creating automated component tests with test data that run locally makes working on a microservice a lot easier.

Approach every project in a solution as a swappable, autonomous, testable unit. From that perspective, a project must have an interface. The interface of a component is a secondary port (described here). A component is a concrete implementation of a secondary port, including all of its dependencies.

The dependency tree

A concrete implementation of an interface potentially has a lot of dependencies, and those dependencies might have dependencies of their own, and so forth. A dependency tree can explode, resulting in a huge test setup.

As described earlier, every project has a Dependencies.cs file that registers all of its dependencies. That makes it easy to bootstrap a single project and let the .NET framework do the heavy lifting for you:

Image 4.) Resolving an instance of a subject under test using a service provider

Note that in this example, some dependencies are overwritten. Those are the parts that cross the application boundary, such as the DbContext or a proxy that calls a third-party API. Mock them to prevent the test from mutating real data.
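The image above shows how the sample repository does this. A rough equivalent, sketched with hypothetical names (AddProductsComponent standing in for the Dependencies.cs registration method, and ProductsDbContext, IPricingApi, and IProductsRepository for the component's types) and EF Core's in-memory provider, could look like this:

```csharp
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;
using NSubstitute;

// Inside the arrange step of a component test:
// register the component exactly as production does, via its Dependencies.cs.
var services = new ServiceCollection();
services.AddProductsComponent(); // hypothetical extension method

// Overwrite the dependencies that cross the application boundary.
// 1. Replace the real database registration with EF Core's in-memory provider.
var dbOptions = services.Single(d =>
    d.ServiceType == typeof(DbContextOptions<ProductsDbContext>));
services.Remove(dbOptions);
services.AddDbContext<ProductsDbContext>(options =>
    options.UseInMemoryDatabase(Guid.NewGuid().ToString()));

// 2. Mock the proxy to the third-party API so the test never mutates real data.
services.AddSingleton(Substitute.For<IPricingApi>());

// Let the service provider build the whole dependency tree for us.
var provider = services.BuildServiceProvider();
var subjectUnderTest = provider.GetRequiredService<IProductsRepository>();
```

The point is the pattern, not the names: reuse the production registrations, override only what crosses the boundary, and let the container resolve the rest.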

Common component-testing pitfalls:

  • Manual bootstrapping. Instantiating every dependency of a component by hand isn’t that bad at first. Then dependencies are added, and some more a couple of months later. After a while, the setup grows bigger than the test itself, and maintaining it becomes a reason not to refactor any more. Resolve dependencies automatically instead!

Review a component test here, here and here.

Ending the horror of end-to-end testing

Testing whether a microservice integrates with the cloud components it depends on can only be done in a test environment. But new features should not be merged to master before they comply with every requirement of the software, and a microservice should only be deployed to a test environment from the master branch. That’s where end-to-end tests come into play. Running them in the CI pipeline guarantees that the software works functionally correctly; all that’s left to validate on the environment itself are permissions and configuration.

Make end-to-end-tests readable

An end-to-end test is documentation of the system. It should match the requirements, and the Product Owner and the testers need to be able to validate that. They don’t care for complex code. So use SpecFlow to decouple the test cases from the test code and write test cases like this:

A readable test

The text in this picture is stored in a .feature file. The SpecFlow framework attaches the lines in .feature files that start with “Given”, “When”, or “Then” to C# code by looking for methods decorated with attributes known to SpecFlow. These methods are called Glue-Code and look like this:

Glue-code

Review the full code here.
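The linked repository contains the real Glue-Code. For readers unfamiliar with SpecFlow, a stripped-down, hypothetical binding class might look roughly like this (the scenario text lives in the .feature file; the attributes connect each line to a C# method):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

// Hypothetical glue code for a scenario such as:
//   Given a product "Coffee" with a price of 3 euro
//   When 2 of that product are ordered
//   Then the order total is 6 euro
[Binding]
public class OrderSteps
{
    // SpecFlow's context injection provides the shared (hypothetical) test-context.
    private readonly OrderTestContext _context;

    public OrderSteps(OrderTestContext context) => _context = context;

    [Given(@"a product ""(.*)"" with a price of (.*) euro")]
    public void GivenAProductWithAPrice(string name, decimal price)
        => _context.AddProduct(name, price);

    [When(@"(.*) of that product are ordered")]
    public void WhenTheProductIsOrdered(int quantity)
        => _context.PlaceOrder(quantity);

    [Then(@"the order total is (.*) euro")]
    public void ThenTheOrderTotalIs(decimal expectedTotal)
        => Assert.AreEqual(expectedTotal, _context.LastOrderTotal);
}
```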

Use an in-memory database for your end-to-end-tests

One of the biggest issues with end-to-end testing is getting usable test data into the database. Many companies use SQL scripts to bring the database into a usable state. Others use copies of the production database, reasoning that if the tests pass on that, the software will surely work in production.

This approach requires a lot of maintenance. The Glue-Code gets cluttered with SQL queries trying to find entities in the database, and separating the test data from the tests causes tests to fail simply because they can’t find the data they need, with false positives as a result.

Define the test data in the test code instead. Swap the connection to a real database for an in-memory replacement and insert the test data into that, and use a separate test to validate the schema of the database and the application’s ability to connect to it. Use Glue-Code to populate the in-memory database with the data a particular test needs.

An end-to-end test usually requires more than one table in the database to contain data. The Glue-Code becomes unreadable if every property of every record of every table is defined in it. To prevent that, create classes that help build up the test data (review sample code here), collect the test data in a test-context, and use that to invoke the application (review sample code here).

As a result, this is what the Glue-Code will look like:

Setting up test-data and invoking an event-driven-application
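The picture above comes from the sample repository. As a sketch of the same idea, assuming a hypothetical ProductBuilder and a shared test-context that wraps the in-memory database and the message handling, the data-setup Glue-Code might read like this:

```csharp
using System.Threading.Tasks;
using TechTalk.SpecFlow;

// Hypothetical glue code: ProductBuilder, EventDrivenTestContext, and the
// ProductOrdered event are illustrative names, not types from the repository.
[Binding]
public class ProductDataSteps
{
    private readonly EventDrivenTestContext _context;

    public ProductDataSteps(EventDrivenTestContext context) => _context = context;

    [Given(@"a product ""(.*)"" priced at (.*) euro exists")]
    public async Task GivenAProductExists(string name, decimal price)
    {
        // A builder keeps the Glue-Code free of column-by-column inserts.
        var product = new ProductBuilder()
            .WithName(name)
            .WithPrice(price)
            .Build();

        // The test-context inserts the entity into the in-memory database.
        await _context.InsertAsync(product);
    }

    [When(@"a ProductOrdered event is received for ""(.*)""")]
    public async Task WhenAProductOrderedEventIsReceived(string name)
    {
        // Invoke the event-driven application directly, without a real service bus.
        await _context.PublishAsync(new ProductOrdered { ProductName = name });
    }
}
```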

Use dependency injection to resolve the subject under test

An end-to-end test tests a combination of components, and each component contains multiple classes. Altogether, the dependency tree of the subject under test is huge. In an event-driven architecture it’s often not even a single unit that’s under test. Resolving all of that by hand would create a massive set-up that is too complex to maintain.

Instead, wire up the application in your end-to-end test context like this:

Wiring up the application in the end-to-end test context

Then resolve the subject under test using the dependency-injection tooling of the .NET framework:

Resolving a unit under test

Review the complete code-sample here.
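Both pictures boil down to the same pattern. A condensed, hypothetical version, where AddDomain, AddDataAccess, and AddMessageHandlers stand in for the Dependencies.cs registrations and IProductOrderedHandler for the primary port, could look like this:

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;
using TechTalk.SpecFlow;

[Binding]
public class ApplicationHooks
{
    public static IServiceProvider Application { get; private set; }

    [BeforeScenario]
    public static void WireUpApplication()
    {
        var services = new ServiceCollection();

        // Swap the part that crosses the application boundary for an in-memory
        // database (this sketch assumes the data-access registration leaves the
        // DbContext registration to the host).
        services.AddDbContext<OrdersDbContext>(options =>
            options.UseInMemoryDatabase(Guid.NewGuid().ToString()));

        // Register the same components production uses, via their Dependencies.cs.
        services.AddDomain();           // hypothetical extension methods
        services.AddDataAccess();
        services.AddMessageHandlers();

        Application = services.BuildServiceProvider();
    }
}

// Somewhere in the Glue-Code, resolve the primary port and invoke it:
// var handler = ApplicationHooks.Application.GetRequiredService<IProductOrderedHandler>();
// await handler.HandleAsync(productOrderedEvent);
```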

The scope of an end-to-end-test

The scope of an end-to-end test is a concrete implementation of a primary port, including all of its dependencies. End-to-end tests are black-box tests that validate the functional requirements of a system: they invoke the system with some input and wait for output.

Common end-to-end-testing pitfalls:

  • Often, when the application under test has an API, testers fire up an instance of the API, execute REST calls against an endpoint, and validate the response. But that’s slow: every call takes at least a second. The subject under test doesn’t have an isolated context either, while every test should have its own runtime environment to be able to create every imaginable test case. Setting up an isolated context per test makes the tests even slower.
  • Testing business requirements, permissions, and configuration all in the same end-to-end tests. Permissions and configuration are different in every environment; testing them in end-to-end tests forces the team to create many of those tests, which results in a slow test suite. Testing business requirements in end-to-end tests requires data to be mutated, which makes those tests useless in a production environment. Separating permission- and configuration-tests from the functional tests makes the test suite faster and usable in every environment.
  • Making the Glue-Code too complex. Many code-bases contain SpecFlow bindings that both wire up the application and set up test data. That violates the single-responsibility principle. Treat test code like production code: apply SOLID and use design patterns to keep it maintainable.

How automated tests change the way a developer works

This project structure and way of testing allow the developer to stop working against a deployed instance of the application and, instead, work test-driven:

  • Start by gathering the requirements of the system and translating them into Given/When/Then scenarios.
  • Next, design the domain entities and services and unit-test those.
  • Implement the secondary port. Start by implementing every mapper, proxy, and so forth, and unit-test those.
  • Write a component test to validate whether these components work as a whole.
  • When everything is implemented, write the Glue-Code of the end-to-end test to see whether the system works as a whole and complies with the business rules of the application.

Summary

Microservices have several dependencies in the cloud. Having a separate test environment for every developer, or emulating one, can be costly, and there’s no need for it. Use a ports-and-adapters architecture to make everything testable, and implement Fowler’s testing pyramid with unit-, component-, and end-to-end tests to make sure the application works before merging to master.

There are more test cases than lines of code, which means there will be more test code than production code. Apply coding practices to keep the test code maintainable: use abstractions and dependency injection.

A unit is a single class. A component is an implementation of a secondary port and all of its dependencies. An end-to-end test tests a primary port and all of its dependencies.

This repository contains the code of a project in which all of these tests have been implemented. Feel free to copy the code and use it. (No warranty.)


Albert Starreveld

Passionate about cloud-native software development. Only by sharing knowledge and code can we take software development to the next level!