Part 2: Creating Topics, Consumers and Producers
(Previously: Part 1: Kafka Core Concepts)
Start by installing Kafka in your dev environment. We will use Landoop's fast-data-dev Docker image.
$ docker run --rm -it \
    -p 2181:2181 -p 3030:3030 -p 8081:8081 \
    -p 8082:8082 -p 8083:8083 -p 9092:9092 \
    -e ADV_HOST=127.0.0.1 \
    landoop/fast-data-dev
You should see the Kafka Development Environment dashboard by visiting 127.0.0.1:3030 in your browser.
Start a Kafka command line:

$ docker run --rm -it --net=host landoop/fast-data-dev bash
Create a topic by providing the ZooKeeper address, the number of partitions, and the replication factor, as discussed in Part 1.

root@fast-data-dev / $ kafka-topics --zookeeper 127.0.0.1:2181 --create --topic hello_topic --partitions 3 --replication-factor 1
List the Kafka topics:

root@fast-data-dev / $ kafka-topics --zookeeper 127.0.0.1:2181 --list
Let's look at the created hello_topic in detail. You should see the 3 partitions along with the elected leader for each.

root@fast-data-dev / $ kafka-topics --zookeeper 127.0.0.1:2181 --describe --topic hello_topic
Let's start a console producer and send some data to the topic.

root@fast-data-dev / $ kafka-console-producer --broker-list 127.0.0.1:9092 --topic hello_topic
Check the Kafka Topics dashboard. You should see the data along with the partition and offset where it was stored.
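Which partition a record lands in depends on its key: keyed records are hashed to a partition (Kafka's default partitioner uses murmur2), while keyless records, like the ones typed into the console producer above, are spread across partitions. Here is a minimal sketch of the keyed case, using CRC-32 purely as a stand-in for murmur2 to illustrate the key → hash → modulo idea:

```python
import zlib

NUM_PARTITIONS = 3  # hello_topic was created with 3 partitions


def partition_for(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    """Illustrative only: Kafka's default partitioner uses murmur2,
    not CRC-32, but the key -> hash -> modulo idea is the same."""
    return zlib.crc32(key) % num_partitions


# The same key always maps to the same partition, which is what
# gives Kafka per-key ordering within a topic.
p1 = partition_for(b"user-42")
p2 = partition_for(b"user-42")
assert p1 == p2
print(0 <= p1 < NUM_PARTITIONS)  # True
```

This is why all records for a given key stay in order: they always go to the same partition, and offsets within a partition are strictly increasing.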
Let's spin up a consumer with the following command and type more data into the producer. You should see the data appear on the consumer side.

root@fast-data-dev / $ kafka-console-consumer --bootstrap-server 127.0.0.1:9092 --topic hello_topic
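Looking ahead to Part 3, the same produce/consume flow can be sketched programmatically. This sketch assumes the third-party kafka-python client (`pip install kafka-python`) and the Landoop broker from above running on 127.0.0.1:9092; the imports are deferred so the serialization helper works even without the client installed:

```python
def encode(message: str) -> bytes:
    # Kafka stores raw bytes; UTF-8 is a common choice for text values.
    return message.encode("utf-8")


def produce(messages, bootstrap="127.0.0.1:9092", topic="hello_topic"):
    # Third-party client, imported lazily: pip install kafka-python
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers=bootstrap, value_serializer=encode)
    for m in messages:
        producer.send(topic, m)
    producer.flush()  # block until all buffered records are sent
    producer.close()


def consume(bootstrap="127.0.0.1:9092", topic="hello_topic"):
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=bootstrap,
        auto_offset_reset="earliest",  # start from the beginning of each partition
        consumer_timeout_ms=5000,      # stop iterating when no new records arrive
    )
    for record in consumer:
        print(record.partition, record.offset, record.value.decode("utf-8"))


if __name__ == "__main__":
    produce(["hello", "kafka"])
    consume()
```

Running this while the console consumer from above is still attached shows both clients receiving the same records, each tagged with its partition and offset.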
Coming up: Part 3: Kafka Integration in Spring Boot Microservices.