Docker uses software-defined networks to protect applications running in containers from unauthorized access. All containers running on the same network can see and freely communicate with each other. Every application not running on the same network is firewalled off.
Sometimes during development or testing we want to use some tools to access application services running in containers on a Docker network. Usually we run those tools directly on the host, that is, our development machine. But in that case the tool does not run on the same Docker network as the service we want to play with, so communication between the tool running on the host and the service running in the container is not possible. What options do we have?
One option is to open container ports and map them to host ports. This way the tool can access the containerized service. Let’s say we have a Kafka broker running inside a container. The broker listens on port 9092 for incoming connections. Thus we can map port 9092 to the equivalent host port. But that is not ideal. In production we should only ever map ports to the host behind which a public API or a public-facing web server is listening. Other services such as Kafka brokers should never be exposed to the public. Do we really want to run Kafka differently in development or test than in production? I say no! OK, then what?
In development or test we can run a tools or bastion container on the same network as all the other services. We can then use Docker Compose to exec into this tools container and run our test tools from within it. Since the tools container is part of the Docker network, applications or tools running in it can see all the other services running in containers on the same network.
Create a folder bastion and in your terminal window navigate to it. Add a file called docker-compose.yml to this folder and give it this content:
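If you no longer have the file from the previous post at hand, a minimal sketch could look like the following. The exact image versions and environment settings are assumptions and may differ from the previous post:

```yaml
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-enterprise-kafka:5.3.0
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  tools:
    image: confluentinc/cp-enterprise-kafka:5.3.0
    # keep the container alive so that we can exec into it
    command: sleep infinity
```

Note that Docker Compose automatically attaches all services to a shared default network, so the tools container can reach the other services by their service names.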
You will certainly notice that this is the same content as we were using in the previous post.
Run the application with docker-compose up. Use docker-compose ps to verify that all containers are up and running. If you have problems with the application, e.g. one of the containers does not reach state Up, then use docker-compose logs to access all the logging information produced by ZooKeeper, the Kafka broker, and the tools container.
Once all containers are up and running, use docker-compose exec tools /bin/bash to start a Bash shell inside the tools container. You should be greeted by a prompt like this:
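The container id in the prompt will differ on your machine, but it will look roughly like this:

```
root@5a3b2c1d4e6f:/#
```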
This is the confirmation that you are indeed in a shell running (as user root) inside the tools container. From within that container we can execute any Kafka tool to access the broker or the ZooKeeper instance. Most of the Kafka command line tools are available to us in the tools container, since it is instantiated from the confluentinc/cp-enterprise-kafka:5.3.0 Docker image, which has them all installed. Let’s use the kafka-topics command line tool to create a topic in the broker:
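Assuming the broker is reachable inside the Docker network under the service name kafka on port 9092 (as defined in the compose file), the command could look like this:

```shell
kafka-topics --bootstrap-server kafka:9092 \
    --create \
    --topic demo-topic \
    --partitions 3 \
    --replication-factor 1
```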
As we have defined it, the topic is called demo-topic and has 3 partitions and a replication factor of 1. We can now use another tool called kafka-console-producer to write some test data into the topic. Use this command to run the console producer:
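Again assuming the broker address kafka:9092 from the compose file, a typical invocation is:

```shell
kafka-console-producer --broker-list kafka:9092 --topic demo-topic
```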
Enter a few strings such as nut, pressing the enter key after each one. When you’re done entering data, press CTRL-d to exit the producer. Your terminal should look similar to this:
Now we want to use yet another tool called kafka-console-consumer to read the data that is stored in the topic. Use this command to do so:
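The --from-beginning flag makes the consumer read the topic from the start rather than only new messages; once more, kafka:9092 is the broker address assumed from the compose file:

```shell
kafka-console-consumer --bootstrap-server kafka:9092 \
    --topic demo-topic \
    --from-beginning
```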
After a short moment the data you entered should appear on screen. Notice that the data does not necessarily appear in the same order you entered it. This is because our topic has 3 partitions, and the data is distributed across those 3 partitions in a round-robin fashion. Ordering is only guaranteed at the partition level, not globally. But that is a subject for another time.
Press CTRL-c to exit the consumer. Then press CTRL-d to exit the tools container, and finally execute docker-compose down -v to tear down the application.
In this post we have used a special tools container, running on the same Docker network as all the application services, as a bastion from which to run tests against those services. We have demonstrated this possibility using various Kafka command line tools.