Sending and Receiving messages

Let’s learn how a message key solves the problem of messages being sent to random partitions.

Before starting this tutorial, please create a Topic first. Learn How to create a Topic in Kafka.

  • The partitioner first checks whether a key is present as part of the message or not. In this example, we are not sending any key, so in this case the partitioner will use the Round Robin algorithm to distribute the messages across partitions (see the sketch below).
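
To make the two cases concrete, here is a minimal hedged sketch using the plain Kafka Java client; the broker address, topic name, and key are assumptions for this example:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedVsKeylessProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // No key: the partitioner spreads these records across partitions.
            producer.send(new ProducerRecord<>("test-topic", "keyless message"));
            // With a key: records sharing the same key always land on the same partition.
            producer.send(new ProducerRecord<>("test-topic", "user-1", "keyed message"));
        }
    }
}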


  • Spring is a Dependency Injection framework that makes Java applications loosely coupled.
  • The Spring framework makes the development process easier for Java EE applications.
  • Spring enables you to build applications from “plain old Java objects” (POJOs) and to apply enterprise services non-invasively to POJOs.
  • Spring was developed by Rod Johnson in 2003.

Introduction to Spring Framework


I) Core Container

The Core Container consists of the Core, Beans, Context, and Expression Language modules.

→ It consists of 4 modules, viz. Core, Beans, Context, and SpEL (Spring Expression Language). These 4 modules provide the fundamentals of Spring; the complete framework is built on top of them.

Core and Beans provide the fundamental part…
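
As a hedged illustration of what the Core and Beans modules provide, here is a minimal sketch of a POJO wired through the container; the class and bean names are invented for the example:

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// A plain old Java object (POJO): no Spring interfaces or base classes required.
class GreetingService {
    String greet(String name) { return "Hello, " + name; }
}

@Configuration
class AppConfig {
    @Bean
    GreetingService greetingService() { return new GreetingService(); }
}

public class SpringCoreDemo {
    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class)) {
            // The container (Beans + Context modules) creates and hands us the POJO.
            System.out.println(ctx.getBean(GreetingService.class).greet("Spring"));
        }
    }
}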


Secrets of Kafka console producers

Yes, it is possible to create a topic using the console producer, but it is really a bad idea to create a topic this way. Read the blog below for details.

Can Kafka console producers also create topics?

— Yes, it is possible, but it is not a good idea to create a topic this way: the topic gets created with the broker’s default settings (such as a single partition) rather than a partition count and replication factor you chose deliberately.

kafka-console-producer.sh --bootstrap-server 127.0.0.1:9092 --topic new_topic


Creating and Deleting the Topics

Learn how to create and delete topics.

Getting Started
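
Topics are usually created and deleted with the console scripts, but as a hedged programmatic sketch, the same can be done with Kafka’s Java AdminClient; the topic name, partition count, and replication factor below are assumptions:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicAdminDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Create a topic with 3 partitions and replication factor 1 (example values).
            admin.createTopics(Collections.singleton(new NewTopic("demo-topic", 3, (short) 1)))
                 .all().get();
            // Delete the same topic.
            admin.deleteTopics(Collections.singleton("demo-topic"))
                 .all().get();
        }
    }
}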


Spring KafkaTemplate

KafkaTemplate is a class that is part of Spring for Apache Kafka, used to produce messages to Kafka topics.

Overview

→ Think of KafkaTemplate as the Kafka counterpart of JdbcTemplate for database interactions.

The KafkaTemplate wraps a producer and provides convenient methods to send data to Kafka topics. The following listing shows the relevant methods from KafkaTemplate:

— If you look below, KafkaTemplate has many different overloaded versions of the send method.

ListenableFuture<SendResult<K, V>> sendDefault(V data);
ListenableFuture<SendResult<K, V>> sendDefault(K key, V data);
ListenableFuture<SendResult<K, V>> sendDefault(Integer partition, K key, V data);
ListenableFuture<SendResult<K, V>> sendDefault(Integer partition, Long timestamp, K key, V data);
ListenableFuture<SendResult<K, V>> send(String topic, V data);
ListenableFuture<SendResult<K, V>> send(String topic, K key, V data);
ListenableFuture<SendResult<K, V>> send(String topic, Integer partition, K key, V…
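
To see how these overloads are used in practice, here is a minimal hedged sketch of a Spring service wrapping KafkaTemplate, matching the ListenableFuture-based signatures above; the service name, topic, and key are assumptions:

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import org.springframework.util.concurrent.ListenableFuture;

@Service
public class LibraryEventProducer { // hypothetical service name
    private final KafkaTemplate<String, String> kafkaTemplate;

    public LibraryEventProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendEvent(String key, String value) {
        // send(topic, key, data): the result arrives asynchronously.
        ListenableFuture<SendResult<String, String>> future =
                kafkaTemplate.send("test-topic", key, value); // assumed topic name
        future.addCallback(
                result -> System.out.println("Sent to partition "
                        + result.getRecordMetadata().partition()),
                ex -> System.err.println("Send failed: " + ex.getMessage()));
    }
}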

Internal Working of KafkaTemplate

Do you have an understanding of how KafkaTemplate works internally?

KafkaTemplate.send() sends the message to Kafka, but in reality the message goes through different layers before it is sent to Kafka.

— The very first layer is the serializer. Any records sent to Kafka need to be serialized to bytes.

There are two serializer configurations that need to be applied to new records.

  1. key.serializer
  2. value.serializer

— These configurations are mandatory for any producer; clients need to provide values for both key.serializer and value.serializer.

The Kafka client Java libraries come with some predefined serializers.
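
For example, the predefined StringSerializer can be plugged into both settings. A minimal hedged sketch of the producer configuration (broker address assumed):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class SerializerConfigDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092"); // assumed broker address
        // Mandatory serializer settings: both key and value must be serialized to bytes.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        System.out.println(props);
    }
}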

— The second layer is the partitioner. This layer determines which partition the…


Consumer Group

The consumer group-id is mandatory; it plays a major role when it comes to scalable message consumption.

Let’s consider we have a topic test-topic with 4 partitions, and a consumer ready with group-id group1. We have a single consumer polling all the partitions in the topic and processing them.

The poll loop is always single-threaded, so in this case a single thread is going to poll from all the partitions.
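
A minimal hedged sketch of such a single consumer using the plain Kafka Java client; the broker address is assumed, and the topic and group-id follow the example above:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class Group1Consumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group1"); // mandatory group-id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("test-topic"));
            // Single-threaded poll loop: one thread reads from all 4 partitions.
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}

Starting a second instance with the same group-id triggers a rebalance, and the 4 partitions are split between the two consumers; this is what makes the group-id the key to scalable message consumption.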


Handling Fault Tolerance

Make sure that 3 broker instances are running along with the ZooKeeper server.

  1. Run the Kafka producer
.\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic test-topic-replicated

Kafka Cluster distributes the Client Requests between brokers

Before we get into the details of how Kafka distributes a client request, we will discuss how Topics are distributed.

We have ZooKeeper and a Kafka cluster. In this example, we have a cluster with 3 brokers.

  • Out of the 3 brokers, one broker will act as the Controller. Normally this is the first broker to join the cluster. Think of this as an additional role for that broker. Now we have the environment completely set.
  • Now it’s time to create a topic: the create-topic command is issued to ZooKeeper, and ZooKeeper takes care of redirecting the request to the controller.


Handling Data Loss

Learn how Kafka handles data loss in the event of failure.

Here we have a Kafka cluster and a representation of how the topic is distributed across the cluster and we have some records present in the file system.

As we all know, clients (producers and consumers) always talk to the leader of a partition to send or retrieve data.

Let’s say broker-1 goes down for some reason. Right now this is the broker which is the leader of partition-0, and all the data written to partition-0 resides in the file system of broker-1. Once it goes down, there is no way for the clients to access this…

Sagar Kudu

Software Engineer at HCL | Technical Content Writing | Follow me on LinkedIn https://www.linkedin.com/in/sagarkudu/
