In this article we will learn how to run Apache Kafka as a Docker image using Docker Compose, which lets you manage multiple container images from a single file. First, we will show how to run Kafka with Zookeeper and Docker. Then, we will show how to run Apache Kafka in KRaft mode with Docker.
Starting Kafka and Zookeeper using the Strimzi images
This is an example of a Docker Compose YAML file which starts a local Zookeeper and Kafka broker using the Strimzi container images.
After installing Docker / Docker Compose on your machine, add the following docker-compose.yaml file:
version: '2'
services:
  zookeeper:
    image: quay.io/strimzi/kafka:0.36.1-kafka-3.5.1
    command: [ "sh", "-c", "bin/zookeeper-server-start.sh config/zookeeper.properties" ]
    ports:
      - "2181:2181"
    environment:
      LOG_DIR: /tmp/logs
  kafka:
    image: quay.io/strimzi/kafka:0.36.1-kafka-3.5.1
    command: [ "sh", "-c", "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}" ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      LOG_DIR: "/tmp/logs"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
Within the configuration, we are using the following environment variables:
KAFKA_LISTENERS
is a comma-separated list of listeners: the host/IP and port to which Kafka binds and on which it listens. For more complex networking this might be an IP address associated with a given network interface on a machine. The default is 0.0.0.0, which means listening on all interfaces.
KAFKA_ADVERTISED_LISTENERS
is a comma-separated list of listeners with their host/IP and port. This is the metadata that is passed back to clients.
KAFKA_ZOOKEEPER_CONNECT
refers to the Zookeeper node, which is running on port 2181.
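Note that KAFKA_ADVERTISED_LISTENERS in the file above points at localhost, which only works for clients running on the same machine. As a sketch, if clients connect from another host, the advertised listener would need a reachable address (192.168.1.10 below is a placeholder, not a value from this setup):

```yaml
# Example environment override for the kafka service when clients
# connect from other machines. 192.168.1.10 is a placeholder:
# replace it with your Docker host's reachable IP address.
environment:
  LOG_DIR: "/tmp/logs"
  KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.10:9092
  KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```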
You can run the docker-compose file as follows:
docker-compose up
You should see from the logs that the cluster is up and running.
Testing a Kafka Cluster running in a Container
To test the above Kafka cluster, we can run commands inside the Kafka container. First, find your Kafka container ID:
$ docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED          STATUS         PORTS                                       NAMES
669552dd7ac6   quay.io/strimzi/kafka:0.36.1-kafka-3.5.1   "sh -c 'bin/kafka-se…"   44 minutes ago   Up 3 seconds   0.0.0.0:9092->9092/tcp, :::9092->9092/tcp   tmp_kafka_1
56fc023fdb9f   quay.io/strimzi/kafka:0.36.1-kafka-3.5.1   "sh -c 'bin/zookeepe…"   44 minutes ago   Up 3 seconds   0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   tmp_zookeeper_1
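Rather than copying the ID by hand, you can capture it with a name filter; a sketch assuming the Kafka service is named "kafka" as in the compose file above (the Zookeeper container is excluded because its name does not contain "kafka"):

```shell
# Grab the first container ID whose name contains "kafka"
# (matches e.g. tmp_kafka_1, but not tmp_zookeeper_1)
KAFKA_CID=$(docker ps -q --filter "name=kafka" | head -n 1)
echo "Kafka container: $KAFKA_CID"
```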
Then, to create a topic named “my-topic”, run the following docker exec command:
docker exec -it 669552dd7ac6 ./bin/kafka-topics.sh --create --topic my-topic --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092
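To confirm that the topic was created, you can describe it (669552dd7ac6 is the example container ID from the docker ps output above):

```shell
# Show partition count, replication factor and partition leaders
# for the topic we just created
TOPIC=my-topic
docker exec -it 669552dd7ac6 ./bin/kafka-topics.sh --describe \
  --topic "$TOPIC" --bootstrap-server localhost:9092
```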
Next, to start a Producer on that Topic execute the following command:
docker exec -it 669552dd7ac6 ./bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic
You can then type messages at the console prompt.
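Instead of typing interactively, you can also pipe messages straight into the producer; a sketch reusing the example container ID from above:

```shell
# Generate three test messages and feed them to the console producer.
# -i keeps stdin open; -t is omitted because a pipe has no TTY.
for i in 1 2 3; do echo "test-message-$i"; done |
  docker exec -i 669552dd7ac6 ./bin/kafka-console-producer.sh \
    --bootstrap-server localhost:9092 --topic my-topic
```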
Finally, execute the following command to Consume messages on the topic “my-topic”:
docker exec -it 669552dd7ac6 ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning
As you can see, we are able to consume messages successfully.
How to run Apache Kafka with Docker in KRaft Mode
The latest versions of Apache Kafka can run without Zookeeper; this is known as Apache Kafka in KRaft mode. For more detail on this topic, see: How to run Apache Kafka without Zookeeper
When using Docker Compose, you can use the following docker-compose.yaml file to run a Zookeeper-less version of Apache Kafka:
version: '2'
services:
  kafka:
    image: quay.io/strimzi/kafka:0.36.1-kafka-3.5.1
    command: [ "sh", "-c", "./bin/kafka-storage.sh format -t $$(./bin/kafka-storage.sh random-uuid) -c ./config/kraft/server.properties && ./bin/kafka-server-start.sh ./config/kraft/server.properties" ]
    ports:
      - "9092:9092"
    environment:
      LOG_DIR: "/tmp/logs"
Run the file again with:
docker-compose up
You should see the KafkaRaftServer starting up in the logs.
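You can also verify that the broker is acting as its own controller with the metadata quorum tool shipped with Kafka 3.x; a sketch, where the container ID placeholder must be replaced with the value from docker ps:

```shell
# Describe the KRaft metadata quorum: leader ID, epoch and voter set.
# <kafka-container-id> is a placeholder for your actual container ID.
docker exec -it <kafka-container-id> ./bin/kafka-metadata-quorum.sh \
  --bootstrap-server localhost:9092 describe --status
```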
Conclusion
In conclusion, running Apache Kafka with Docker provides an efficient and convenient way to experiment, develop, and test Kafka clusters without the complexities of setting up a full production environment. Docker’s containerization technology encapsulates Kafka’s components, making it easy to orchestrate and manage.
Found the article helpful? If so, please follow us on our social channels.