In this post I want to propose a workflow ideally suited for the development of containerized microservices written in C# on .NET. A similar approach can be taken for other development environments such as Java, Python, or Node.js.
The code for this project can be found on GitHub: https://github.com/gnschenker/dotnet-microservice-docker and here is part 2 of the series: https://gabrielschenker.com/index.php/2019/10/09/a-docker-workflow-for-net-developers-part-2/
Boundary Conditions
I am assuming that we are going to develop a distributed application consisting of several microservices. These microservices are loosely coupled and communicate either via the request-response pattern or via an event-driven architecture.
To really qualify as a microservice, a service should have a few characteristics:
- it does exactly one thing and it does it very well
- it has a well-defined API or public contract
- it is a black box, that is, its implementation doesn’t matter from an outside perspective
- an implementation of a microservice can be thrown away and seamlessly replaced by a new implementation that honors the original API or contract
- it is simple in the sense that its context fits well into the head of the developer(s) responsible for it
In this post we are going to implement a sample microservice in C# on .NET Core 2.2, using test-driven development (TDD). The microservice will run in a Docker container and ultimately be deployed to Kubernetes in staging and production.
The microservice needs to be designed from the get-go to allow for zero-downtime deployments, be it rolling updates or blue-green deployments, to name just two upgrade strategies.
A Few Container-related Things
As the title of this post states, we want to define a frictionless development process where containers are leveraged to support the developer instead of adding more overhead.
During development, testing, and continuous integration, one of the most underrated tools is `docker-compose`. Used the right way, it is an awesome productivity tool, and we will use it extensively in our example. `docker-compose` shines when dealing with a distributed application consisting of multiple services, each running in a container. The tool uses a YAML file to declaratively define what the multi-service application looks like. The default name of this file is `docker-compose.yml`, but any other meaningful name can be chosen. Each service in this file gets a friendly name, which will be the DNS name under which it can be reached by the other containerized services running on the same Docker network. Also note that `docker-compose` automatically creates a new network on which all containers will run, if you do not explicitly define networks yourself.
Before we continue, let's create a project folder. Let's name it `demo-project` and navigate to it:
$ mkdir ~/demo-project && cd ~/demo-project
Inside this folder, create our first `docker-compose.yml` file with the content below. It will be our starting point for the project:
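The original post shows the file as a listing; here is a sketch that matches the description in the bullets below (the line numbers referenced there assume this exact layout):

```yaml
version: "2.4"
services:
  db:
    image: postgres:12.0-alpine
    volumes:
      - pg-data:/var/lib/postgresql/data
      - ./db/init:/docker-entrypoint-initdb.d
    ports:
      - 5432
    environment:
      - POSTGRES_DB=sampledb
      - POSTGRES_USER=pguser
      - POSTGRES_PASSWORD=topsecret

volumes:
  pg-data:
```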
Notice a few things about this definition:
- Docker Compose files come in versions 2.x and 3.x. It is important to note that compose files used for development, test, and continuous integration (CI) should use version 2.x and not 3.x. The latter is used when deploying to either a Docker Swarm or Kubernetes, that is, a cluster of Docker hosts. The two versions have slightly different features due to the nature of their destination: 2.x is for single-node deployments, that is, your development, test, or CI system; 3.x is for cluster deployments.
- Our sample file defines exactly one service called `db`. This will be the name under which other services can find and access the service.
- The service is a Postgres database and uses the image `postgres:12.0-alpine` to create container instances.
- We want Postgres to store its data in a named volume (line 6). The volume name is `pg-data`, and it has to be defined in the file as such (lines 15 and 16).
- We want to pass some database initialization file(s) to Postgres, which it will run upon first start (line 7).
- We tell Docker to map container port `5432`, on which Postgres is listening for incoming requests, to any free host port (line 9). We could technically define a fixed host port, but if there is a chance that multiple instances of Postgres run on the same host (which during development can happen), this could result in port conflicts. We open this port to the host so that we can use tools like `pgAdmin` running on the host to access Postgres running inside a container.
- On lines 11 to 13 we define a few environment variables whose values will be used by Postgres upon start. In our case we want a database `sampledb` with a user `pguser` having the password `topsecret`. These values will have to be used as part of the connection information of any app that wants to access Postgres.
Now we are going to define a database initialization script so that we have some pre-canned data to play with. In the project folder, create a subfolder `db/init` and add a file called `init-db.sql` to it. To this file, add the following SQL statements:
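The original script is shown as a listing in the post; here is a minimal sketch matching the description (the concrete hobby names are illustrative assumptions):

```sql
-- create a simple table to play with
CREATE TABLE hobbies (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

-- add six sample hobbies (the names are made up for illustration)
INSERT INTO hobbies (name) VALUES
    ('Reading'),
    ('Hiking'),
    ('Cooking'),
    ('Photography'),
    ('Gardening'),
    ('Chess');
```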
With this script we create a simple table `hobbies` and add six sample hobbies to it.
Now that everything is ready, start the application with `docker-compose up -d`.
Once the image has been downloaded and the application has started, we can check its status using the command `docker-compose ps`. We should see something like this:

[Screenshot: output of `docker-compose ps`]
Notice the port mapping in the last column of the above output. Port `5432`, on which Postgres is listening, is mapped to host port `32770` in my case. In your case it may be a different port in the same range. If I now want to use a tool such as `pgAdmin`, I need to provide port `32770` in the connection settings.
Let's now look at the logs generated by Postgres. Use the command `docker-compose logs` to access the aggregated log of the application (in our case consisting of only a single service, `db`). In the output you should find information about the initialization process of the database. Look out for the table creation and insert statements that were executed as part of the initialization script we defined.
Exercise: Install `pgAdmin` if you haven't done so already, then connect to your Postgres database (make sure you select the correct port!). Locate the table `hobbies`, then list and edit its content.
Scaffolding a .NET Microservice
Run `dotnet --version` to double-check that you have .NET Core 2.2.x installed. If you do, then from within the project folder run `dotnet new webapi --name api` to scaffold a WebAPI-based microservice called `api`. The code will be placed in the subfolder `api` of our project folder.
If you happen to have .NET Core 3.0 installed, you can follow along, but there are some minor breaking changes between the versions that are documented here.
By default, the `api` project is configured to redirect to and use SSL. For this sample we do not want to complicate things, so just comment out or remove the statement `app.UseHttpsRedirection();` in the `Startup.cs` file on line 44.
Execute `dotnet run` to test the application. You should see this:

[Screenshot: the `api` microservice running on the host]

As we can see, the service is up and running and listening on port `5000`. From another terminal window, use the command `curl localhost:5000/api/values` to test whether you get a result back. You should get `["value1","value2"]`.
Great. Stop the application by pressing `CTRL-C`.
Now we want to debug the application, and we are using Visual Studio Code for this. From within your terminal, open the `api` folder in VS Code with this command: `code ~/demo-project/api`.
When you do this for the first time, VS Code may install some .NET dependencies. Wait until the editor is ready. Put a breakpoint on line 17 of the `ValuesController`. Switch to the Debug view of VS Code and make sure to select the pre-defined launch task called .NET Core Launch (web) from the dropdown, as indicated below:
[Screenshot: Debug view in VS Code with the .NET Core Launch (web) task selected]
Hit the green arrow to start debugging. Then execute `curl localhost:5000/api/values` again from your terminal and observe how the debugger stops execution at the breakpoint. You can now watch variables and step through the code line by line.
This is the preferred way of doing line-by-line debugging. We can also configure the system to debug inside a container, but that is a bit more tricky and should not be our first goal.
Creating a Dockerfile
Before we can run our microservice from within a container, we need to make sure that Kestrel, the .NET web server, does not listen on `localhost` but on `0.0.0.0`, that is, on all endpoints. Otherwise, no connections from outside the container can be made to the web server. To do this, open `Program.cs` and add `.UseUrls("http://0.0.0.0:5000")` to the `CreateWebHostBuilder` method.
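After this change, the builder method might look like this (a sketch based on the default .NET Core 2.2 scaffold):

```csharp
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        // listen on all network interfaces, not just localhost
        .UseUrls("http://0.0.0.0:5000")
        .UseStartup<Startup>();
```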
Now we create a Dockerfile for our `api` service. Add a file called `Dockerfile` to the `api` folder with the following content:
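The original file is shown as a listing in the post; here is a sketch matching the description below (the line numbers referenced there assume this exact layout):

```dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:2.2
WORKDIR /app
COPY api.csproj ./
RUN dotnet restore
COPY . .
CMD ["dotnet", "run"]
```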
We are using the SDK of .NET Core 2.2 in this very first Dockerfile. No worries, once we’re ready for deployment we will only use the runtime to reduce the footprint of the container.
Notice how we first copy only the project file into the container (line 3), then do a restore (line 4), and finally copy all remaining files into the container (line 5). This avoids long build times on every code change: we know that `dotnet restore` usually takes a long time, and we also know that the project file changes rarely. Thus this special ordering of commands helps us make optimal use of the Docker build cache.
Now, in your terminal, from within the folder `api`, build the image, let's call it `acme/api:1.0`, with the command `docker image build -t acme/api:1.0 .` Do not forget the period at the end of the command.
Once the image is successfully built, we can run a container from this image with:
docker container run --name api -p 5000:5000 acme/api:1.0
The output of the above command will tell us that the web server is listening at `http://0.0.0.0:5000`, which is what we wanted. Since in the above `docker run` command we have mapped container port `5000` to host port `5000`, we can use `curl localhost:5000/api/values` to test the microservice running in the container. The answer once more should be `["value1","value2"]`.

Before you proceed, stop and remove the container with `docker container rm -f api`.
Preparing for Test Driven Development
In the project folder, create a file `docker-compose-ut.yml` and add the following content:
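Again, the original file is shown as a listing in the post; here is a sketch matching the description below (the line numbers referenced there assume this exact layout; the `working_dir` entry is an assumption):

```yaml
version: "2.4"
services:
  db:
    image: postgres:12.0-alpine
    volumes:
      - pg-data:/var/lib/postgresql/data
      - ./db/init:/docker-entrypoint-initdb.d
    ports:
      - 5432
    environment:
      - POSTGRES_DB=sampledb
      - POSTGRES_USER=pguser
      - POSTGRES_PASSWORD=topsecret

  api:
    image: acme/api:1.0
    volumes:
      - ./api:/app
    ports:
      - 5000:5000
    working_dir: /app
    command: dotnet watch run

volumes:
  pg-data:
```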
In comparison to the previous Docker Compose file, we have:

- added a second service called `api`. It uses our Docker image `acme/api:1.0` (line 16)
- mapped the source of the microservice in host folder `api` into the `/app` folder inside the container, to achieve live updates of the code (line 18)
- mapped container port 5000 to host port 5000 (line 20)
- finally, overridden the CMD from the Dockerfile with another command, here `dotnet watch run` (line 22). The latter command will automatically restart the microservice each time a code change is detected.
Let's now run our application with `docker-compose -f docker-compose-ut.yml up -d`. Notice how we use the `-f docker-compose-ut.yml` parameter to tell `docker-compose` to use a specific compose file. Use `curl` once more to test that the microservice works.
Now change some code; for example, on line 17 of the `ValuesController` class, return an array of three values instead of just two, as sketched below. Save your changes.
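The modified action might look like this (a sketch based on the default scaffold; your line numbers may differ):

```csharp
[HttpGet]
public ActionResult<IEnumerable<string>> Get()
{
    // return three values instead of the scaffolded two
    return new string[] { "value1", "value2", "value3" };
}
```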
Use the command `docker-compose -f docker-compose-ut.yml logs` to retrieve the logs. Notice the following:
[Screenshot: logs showing `dotnet watch` detecting the file change and restarting the application]
Here we have the proof that `dotnet watch run` works as expected and automatically restarted our microservice inside the container. Use `curl` to verify that the call now indeed returns three instead of two values.
OK, with that we have just established an edit-and-continue process involving containers. The next step will be to add a test assembly to the project and then adjust the Docker Compose file so that the tests are continuously run while we're developing. Let's start with the former. First, stop the application, if it is still running, with `docker-compose -f docker-compose-ut.yml down -v`.
In your terminal, make sure you're still in the project folder `demo-project` and then execute `dotnet new xunit --name tests`. This will scaffold a unit test project in the subfolder `tests`. Since we want to test code that is in the `api` project, we need to add a reference to that project to our new `tests` project. To do so, execute `dotnet add tests reference api`. Finally, we need to add another NuGet package to the `tests` project with `dotnet add tests package Microsoft.AspNetCore.Mvc.Core`. With this we're ready to roll.
Delete the sample test class that is part of the `tests` project and add a new file `ValuesControllerSpec.cs` instead, with the following content:
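The original spec is shown as a listing in the post; a minimal sketch of such a spec, assuming the scaffolded `ValuesController`, might look like this:

```csharp
using api.Controllers;
using Xunit;

namespace tests
{
    public class ValuesControllerSpec
    {
        [Fact]
        public void Get_should_return_the_expected_values()
        {
            // instantiate the controller directly; no web host needed
            var controller = new ValuesController();

            var result = controller.Get();

            // the scaffolded action returns the two default values
            Assert.Equal(new[] { "value1", "value2" }, result.Value);
        }
    }
}
```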

Now, in the terminal, still in the project folder, execute `dotnet test tests` to execute all tests defined in `tests`. You should see something like this:
[Screenshot: output of `dotnet test tests` showing the passing test run]
Now let's change the `docker-compose-ut.yml` file so that it continuously (re-)runs all the unit tests whenever some code changes, be it application code in the microservice or test code. Adjust the volume mappings and the command of the `api` service so that they look like this:
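The adjusted service definition is shown as a listing in the post; one possible adjustment mounts both the `api` and the `tests` folders and runs the tests in watch mode (the exact paths and the `working_dir` are assumptions):

```yaml
  api:
    image: acme/api:1.0
    volumes:
      - ./api:/app/api
      - ./tests:/app/tests
    working_dir: /app/tests
    command: dotnet watch test
```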

[Listing: `docker-compose-ut.yml` adjusted to continuously run unit tests]

Run the application with this new definition:
docker-compose -f docker-compose-ut.yml up -d
Now follow the logs of the `api` service with:
docker-compose -f docker-compose-ut.yml logs -f api
In the logs you will see that the tests have been executed successfully. Now edit the `Get` method of the `ValuesController` class to look like this:

[Listing: `ValuesController` returning an `OkObjectResult`]
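The modified method might look like this (a sketch; the exact body is an assumption):

```csharp
[HttpGet]
public ActionResult<IEnumerable<string>> Get()
{
    // wrap the values in an explicit 200 OK result
    return Ok(new string[] { "value1", "value2", "value3" });
}
```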
Then modify the `ValuesControllerSpec` class so that it has the following two facts:

[Listing: the two facts testing the modified `ValuesController`]
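A sketch of what those two facts might look like, assuming the modified `Get` method shown above:

```csharp
using System.Collections.Generic;
using System.Linq;
using api.Controllers;
using Microsoft.AspNetCore.Mvc;
using Xunit;

namespace tests
{
    public class ValuesControllerSpec
    {
        [Fact]
        public void Get_should_return_ok()
        {
            var controller = new ValuesController();

            // when using Ok(...), the ActionResult<T> carries the result object
            var result = controller.Get().Result as OkObjectResult;

            Assert.NotNull(result);
            Assert.Equal(200, result.StatusCode);
        }

        [Fact]
        public void Get_should_return_three_values()
        {
            var controller = new ValuesController();

            var result = controller.Get().Result as OkObjectResult;
            var values = Assert.IsAssignableFrom<IEnumerable<string>>(result.Value);

            Assert.Equal(3, values.Count());
        }
    }
}
```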
Notice that each time you save modified code, the `dotnet watch test` command restarts, first compiling and then executing all the unit tests. The result is now a truly friction-free and continuous TDD experience.
Conclusion
In this post we have established a process for frictionless test-driven development using Docker containers. We have exploited the flexibility that `docker-compose` provides to set up a system that continuously updates code inside the container and re-runs the tests whenever changes in the application code or in the test code are detected. The developer can now concentrate on cranking out code and tests for that code.
Here is part 2 of the series: https://gabrielschenker.com/index.php/2019/10/09/a-docker-workflow-for-net-developers-part-2/