In my previous post I introduced a workflow for .NET Core developers who develop microservices running in containers. The workflow is meant to reduce the friction introduced by the use of containers and to leverage the benefits of containers instead.

In this part 2 of the series I am going to show how we can further extend the workflow to include integration tests for the microservice at hand. Integration tests often access the system under test (the microservice) via its public API. I will do the same in this post.

Source code accompanying this post is available on GitHub.

I like to use Node and Jasmine to write my integration tests. There is less ceremony and a lot of productivity to be gained. But we could just as well write the integration tests in .NET Core, e.g. using xUnit.

To follow along, create a subfolder integration-tests in our project folder demo-project. In a terminal window navigate to this folder and use npm init to initialize it as a Node JS project. This creates a package.json file, which is roughly the equivalent of a C# project file.

Next we want to install the two Node libraries jasmine and axios. The former contains the test framework and the latter is a REST client library we will use to access the public API of our microservice. Do this with npm install jasmine --save-dev and npm install axios --save-dev. Afterwards, the two external libraries are listed as development dependencies (devDependencies) in the package.json file.

We also want to initialize Jasmine for our test project. Execute the command node node_modules/jasmine/bin/jasmine init. This creates a file jasmine.json in the folder spec/support that is used to configure Jasmine. You can adjust this configuration to your liking. By default, Jasmine expects the tests to be found in a subfolder called spec, in files whose names match *[sS]pec.js.
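For reference, the generated spec/support/jasmine.json looks roughly like this (these are the defaults at the time of writing; your version of Jasmine may generate slightly different values):

```json
{
  "spec_dir": "spec",
  "spec_files": ["**/*[sS]pec.js"],
  "helpers": ["helpers/**/*.js"],
  "stopSpecOnExpectationFailure": false,
  "random": true
}
```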

Create a first Smoke Test

It is always a good idea to start with a very trivial test first, to make sure the whole setup works as expected. Thus add a file called api-spec.js to the spec subfolder in the tests project. To this file add the following code snippet:
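A minimal version of such a smoke test could look as follows. Note that under Jasmine, describe, it and expect are globals provided by the test runner; the small shim at the top is only there so that this sketch can also be executed directly with plain node, and is not needed in a real Jasmine project:

```javascript
// spec/api-spec.js
// Shim: only needed when running this file directly with `node`,
// outside the Jasmine runner, which normally provides these globals.
if (typeof describe === 'undefined') {
  global.describe = (_name, body) => body();
  global.it = (name, body) => { body(); console.log(`ok - ${name}`); };
  global.expect = (actual) => ({
    toBe(expected) {
      if (actual !== expected) {
        throw new Error(`expected ${expected} but got ${actual}`);
      }
    },
  });
}

describe('api', () => {
  it('smoke test: true equals true', () => {
    expect(true).toBe(true);
  });
});
```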

If you have never seen a Jasmine (or Mocha) test then please refer to the documentation found here. It is a trivial test that makes sure true equals true. With it we can verify that our setup is OK. In your terminal window make sure you're in the integration-tests folder and then execute npm test. You should see something along the lines of:

Running a first smoke test

As we can see, one test (here called spec) has been executed. Zero of the specs have failed. Obviously our setup works as expected.

Now it is time to craft a Docker Compose file that will streamline our integration testing. The goal is to provide the developer with a friction-free environment in which they can concentrate on writing tests and code instead of having to worry about maintaining plumbing code.

The Docker Compose File

Here I want to present a Docker Compose file, which I call docker-compose-it.yml, that can be used to support continuous integration testing. The developer can edit the code of the microservice, which triggers a restart of the service. They can also modify or add integration tests and trigger a re-run of all tests when the changes are saved. Here is the file:
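The full file is available in the GitHub repository accompanying this post. To give you an idea of its overall shape, here is a minimal sketch; the image tag, credentials, build contexts, commands and health-check timings are placeholders, and the line numbers I discuss refer to the original file:

```yaml
version: "2.4"
services:
  db:
    image: postgres:11-alpine
    environment:
      POSTGRES_PASSWORD: secret
    healthcheck:
      # use netcat to check that Postgres is listening on port 5432
      test: ["CMD", "nc", "-z", "localhost", "5432"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 10s
  api:
    build: .
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - .:/app            # mount the source for live code updates
    command: dotnet watch run
    healthcheck:
      # curl --fail returns a non-zero exit code for non-2xx responses
      test: ["CMD", "curl", "--fail", "http://localhost/api/values"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 10s
  tests:
    build: ./integration-tests
    depends_on:
      api:
        condition: service_healthy
    volumes:
      - ./integration-tests:/app
    command: npm run watch
```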

There are quite a few interesting points in this Docker Compose file that I want to discuss, so that you can fully understand their purpose. Let's start with the db service, our database. We have defined a healthcheck on lines 14 through 19. The health check is used to provide information about the health of the application running inside the container. In this case the application is Postgres and we have defined a very simple check: whether or not the database is listening on port 5432. We are using the Linux tool netcat (nc for short) to do so. If the test command returns a non-zero value, Docker reports the container as unhealthy. Lines 16 through 19 define how often the health check shall run (line 16), how long the check may take before it times out (line 17), how many times the check can fail before Docker reports the container as unhealthy (line 18), and how long a grace period the container has to start up before the first health check happens (line 19).

Next let’s look at the api service. First focus on lines 24 through 26. Here we define that the api service can only be started once the db service reports as healthy. That makes sense, since the api microservice needs to access the database.

The service also defines a healthcheck, on lines 32 through 37. This time we are using curl to probe an endpoint of our microservice. We are using the --fail parameter so that curl returns a non-zero exit code if the HTTP status is not 200.

On line 28 we are mounting the source folder into the container so that we get live updates of the code. Together with the command on line 31, this causes the microservice to restart automatically every time code changes are detected.

The final service is called tests and it is the one that contains the integration tests. For live code updates we are mounting the source folder into the container on line 46. The command on line 47 guarantees that all tests are re-run each time a code change is detected. For this we are using the watch script defined in package.json. This script executes the following statement:

nodemon -L --watch spec --exec jasmine

That is, it uses nodemon to restart jasmine each time something in the spec folder changes. That makes sense, since our tests are located in that subfolder. Please note the -L (or --legacy-watch) parameter. Without it, nodemon will not pick up file changes when running inside a container. Please read here for more details about this issue.
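Putting the test and watch commands together, the scripts section of our package.json then contains entries along these lines:

```json
"scripts": {
  "test": "jasmine",
  "watch": "nodemon -L --watch spec --exec jasmine"
}
```

npm test runs the tests once, while npm run watch keeps re-running them on every change.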

Lines 42 through 44 declare that the tests service can only be started once the api service is up and healthy. That is essential; otherwise the tests would run too early and fail.

Writing Integration Tests

Finally we can do what is the main goal of this whole post: write integration tests. But first run the application using the above Docker Compose file. In a terminal window navigate to the project folder demo-project and execute the following command:

docker-compose -f docker-compose-it.yml up

We deliberately do not run the application in the background. This way you can observe the log messages produced by the three services of the application as you go, and quickly discover whether everything works as expected or whether there are problems. Keep that terminal window on the side, but visible, while you continue.

Open the demo-project in VS Code and locate the file api-spec.js in the integration-tests project. Add the following code snippet at the beginning of the file (line 1):

const axios = require('axios');

We will be using the axios library to make HTTP calls to the microservice.

Notice that when you save any changes to the file api-spec.js, all the integration tests are re-run, exactly as we desired. This should be clearly visible in the logs of the application.

Now we can add a first real test. Your code should look like this:
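A sketch of what api-spec.js could look like at this stage follows. The line numbers discussed in the text refer to the original file in the repository, and the expected values (value1, value2) are simply what the default .NET Core Web API template returns, so adjust them to your own microservice:

```javascript
const axios = require('axios');

describe('api', () => {
  // Docker's DNS resolves the compose service name 'api' inside the network
  const baseUrl = 'http://api';
  let response;

  beforeAll(async () => {
    // async/await keeps the asynchronous HTTP call readable
    response = await axios.get(`${baseUrl}/api/values`);
  });

  it('returns status 200 OK', () => {
    expect(response.status).toBe(200);
  });

  it('returns the expected values', () => {
    expect(response.data).toEqual(['value1', 'value2']);
  });

  it('smoke test: true equals true', () => {
    expect(true).toBe(true);
  });
});
```

Note that this spec only passes while the application is running via docker-compose-it.yml, since it calls the live api service over the network.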

The new test is on lines 10 through 20. We use axios to access our microservice via the URL defined on line 11. Notice that we can use Docker's DNS service to resolve the service name of the microservice as it is defined in the docker-compose-it.yml file, namely api. I have selected axios as my HTTP client since it supports the async/await programming model, which makes asynchronous code highly readable and maintainable.

The spec on lines 13 through 16 validates that the call to endpoint /api/values returns a status code 200 (OK). The second spec on lines 17 through 20 asserts that the value returned matches the expectations.

Save your changes and make sure all the integration tests are re-run. You should have 3 specs now. If a spec fails, correct the code and continue.

Bonus question: How could the above code be optimized to avoid duplication?

Now try to add another integration test that tests endpoint /api/values/1. How would you do that?

When you’re done with testing, simply stop the application with CTRL-C and then run:

docker-compose -f docker-compose-it.yml down

to cleanup your environment.

Running Unit Tests and Integration Tests in Parallel

Notice that you can run unit tests and integration tests in parallel: the former by using our docker-compose-ut.yml file and the latter by using docker-compose-it.yml. To make sure there are no conflicts when running both applications with Docker Compose from within the same folder demo-project, you can use the parameter -p (or --project-name) of Docker Compose to define an explicit project name for each app. Thus, in a first terminal window, from within the folder demo-project, run the following command:

docker-compose -p unit-tests -f docker-compose-ut.yml up

and in a second terminal window, also from within folder demo-project this one:

docker-compose -p integration-tests -f docker-compose-it.yml up

This way you can write code, unit tests and integration tests at the same time. Isn’t that a real productivity gain?


In this second post of the series on how to define a workflow for .NET Core developers using Docker, I have shown how to set up a productive and friction-free environment for developing integration tests for a microservice. Once again the use of advanced techniques around Docker Compose was a key element of this approach.

Part 1 of the series can be found here and the code accompanying this series of posts is available on GitHub.

Part 3 of the series which is about debugging inside a container can be found here.
