In this tutorial, we will learn how to get started with Apache Kafka. We will create a simple setup with a single-broker Kafka cluster and produce/consume messages with it.
Apache Kafka architecture
In Kafka, there are three types of clusters:
• Single node–single broker
• Single node–multiple broker
• Multiple node–multiple broker
The following diagram depicts an example of a single node–single broker cluster:
First, download Apache Kafka from https://kafka.apache.org/downloads
Then, unzip the binary build in a location of your choice before moving to the next section.
Starting the ZooKeeper Server
Next, start the ZooKeeper server. Kafka ships with a simple ZooKeeper configuration file to launch a single ZooKeeper instance. To start the instance, use the following command:
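Assuming you are in the root of the unzipped Kafka distribution, the start script and its stock configuration file are:

```shell
# Launch a single ZooKeeper instance using the configuration shipped with Kafka
bin/zookeeper-server-start.sh config/zookeeper.properties
```

The process runs in the foreground, so keep this terminal open and use a new one for the following steps.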
Check from the console logs that ZooKeeper is up and running:
[2022-05-10 19:33:16,682] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
[2022-05-10 19:33:16,683] INFO zookeeper.request_throttler.shutdownTimeout = 10000 (org.apache.zookeeper.server.RequestThrottler)
[2022-05-10 19:33:16,694] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
[2022-05-10 19:33:16,695] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
The following are the main properties defined in zookeeper.properties:
- dataDir : The directory where ZooKeeper stores its data (default /tmp/zookeeper)
- clientPort : The listening port for client requests. By default, ZooKeeper listens on TCP port 2181
- maxClientCnxns : The per-IP limit on the number of client connections (0 = unbounded)
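Put together, a minimal config/zookeeper.properties covering these properties looks like this (values are the defaults listed above):

```properties
# Directory where ZooKeeper stores its snapshot data
dataDir=/tmp/zookeeper
# Port on which clients (the Kafka broker) connect
clientPort=2181
# Per-IP connection limit; 0 disables the limit (fine for local development)
maxClientCnxns=0
```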
Starting the Kafka Broker
You can start the Kafka broker using the default configuration file (config/server.properties).
NOTE: If you are binding to localhost, we recommend setting the advertised.listeners property as follows; otherwise, Kafka will advertise the value returned by java.net.InetAddress.getCanonicalHostName():
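For a local setup, the corresponding entry in config/server.properties is:

```properties
# Advertise localhost to clients instead of the canonical host name
advertised.listeners=PLAINTEXT://localhost:9092
```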
Next, start the Kafka broker with the following command:
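From the installation directory, in a new terminal (ZooKeeper must still be running):

```shell
# Launch the Kafka broker with the default configuration
bin/kafka-server-start.sh config/server.properties
```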
Finally, check from the console logs that Kafka is also up and running:
[2022-05-10 19:34:46,047] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started socket server acceptors and processors (kafka.network.SocketServer)
[2022-05-10 19:34:46,052] INFO Kafka version: 3.1.0 (org.apache.kafka.common.utils.AppInfoParser)
The following are the main properties you can configure in server.properties:
- broker.id : The unique, non-negative integer id of each broker (default broker.id=0).
- listeners : The address the socket server listens on (default port 9092).
- log.dirs : The directory where log files are stored (default log.dirs=/tmp/kafka-logs).
- num.partitions : The default number of log partitions per topic (default num.partitions=1).
- zookeeper.connect : The ZooKeeper connection string (default zookeeper.connect=localhost:2181).
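A minimal sketch of these settings in config/server.properties, using the default values listed above:

```properties
# Unique id of this broker in the cluster
broker.id=0
# Listen on port 9092 on all interfaces
listeners=PLAINTEXT://:9092
# Where partition log segments are stored
log.dirs=/tmp/kafka-logs
# Default partition count for newly created topics
num.partitions=1
# ZooKeeper connection string
zookeeper.connect=localhost:2181
```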
Creating a Topic
Kafka ships with the kafka-topics.sh script to manage topics. Let's create a topic called myTopic with one partition and one replica:
bin/kafka-topics.sh --create --topic myTopic --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092
Next, to display information, such as the partition count, you can use the describe option:
bin/kafka-topics.sh --describe --topic myTopic --bootstrap-server localhost:9092
Topic: myTopic	TopicId: npLmDgyLTPWFN1aYVBNFxg	PartitionCount: 1	ReplicationFactor: 1	Configs: segment.bytes=1073741824
	Topic: myTopic	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
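You can also verify that the topic exists by listing all topics on the broker:

```shell
# Print the name of every topic known to the broker
bin/kafka-topics.sh --list --bootstrap-server localhost:9092
```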
Starting a Producer
You can use the kafka-console-producer.sh script to start a producer. It reads input from the command line and publishes it as messages; by default, each new line becomes a new message:
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic myTopic
Now type the following:
Hello World! [Enter]
Starting a Consumer
Kafka provides the kafka-console-consumer.sh script to start a message consumer client. It prints messages to the command line as soon as it subscribes to the topic on the broker:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic myTopic --from-beginning
You will see the message sent from the producer:
Hello World!
This article was a walkthrough of setting up Apache Kafka. We showed how to install and bootstrap a single-broker cluster, and then how to create a topic and exchange messages with a producer and a consumer.