Strimzi — Kafka on Kubernetes

Weng Kee Teh
4 min read · Apr 8, 2022

Many years ago, setting up an Apache Kafka cluster on bare-metal servers could be really complicated: sizing the storage, choosing the replication factor across brokers and the number of topics and partitions, sourcing high-performance hardware for I/O-heavy workloads and high throughput on the internal network, designing for high availability… the list goes on.

Despite that, the benefit of using Kafka as the core of an event-driven system is still very much justified today. The story of LinkedIn sending 7 trillion messages per day over Kafka says it all.

Fast forward to the era where cloud native is the new norm, and setting up Kafka has become so much easier. You still very much need to fine-tune the parameters and configurations, but standing up a Kafka cluster could not be simpler. In a matter of minutes instead of days or months, you can get your hands on possibly one of the fastest messaging brokers in the world. It goes by the name of Strimzi, born with Kubernetes and cloud-native development in mind. It comes with its very own Kubernetes Operator that orchestrates and streamlines the deployment of Kafka clusters, users, topics, ZooKeeper nodes and more.

Strimzi is Kafka on Kubernetes.
Red Hat AMQ Streams is Strimzi.

There is an even easier way of using Strimzi, which is to use Red Hat AMQ Streams. It comes with a nice GUI and does a pretty good job of guiding users through constructing the custom resource YAML that the Operators reconcile.

First, you will need to install the AMQ Streams Operator from the OperatorHub page. It is a fully automated task and should not take more than a few minutes before the Operator is ready. Move on to the “Red Hat Integration — AMQ Streams” page, which is where you can configure everything the AMQ Streams Operator oversees. Click on the Kafka tab, then click Create to create your first cluster. For first-timers, you may leave everything at its defaults.
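
Behind the form, the console is simply building a Kafka custom resource for the Operator to reconcile. Here is a minimal sketch of what that YAML looks like, assuming the default cluster name my-cluster; the replica counts and storage type are illustrative:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      # plain internal listener; add a tls listener for encrypted traffic
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage:
      type: ephemeral   # fine for a demo; use persistent-claim for real setups
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    # enables management of KafkaTopic and KafkaUser custom resources
    topicOperator: {}
    userOperator: {}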

Before we move on to the next step, what is a Kafka topic? A topic is where a Kafka producer publishes messages, and also where a consumer subscribes and reads them. Multiple producers can produce messages to a topic, and multiple consumers can consume from a topic. Unlike a message queue in the JMS world, a message is not removed right after it is read; the housekeeping of messages is the responsibility of the Kafka brokers, and it is highly configurable.

To create a Kafka topic, go to the Kafka Topic tab and click Create KafkaTopic. If you left everything at its defaults previously, especially the cluster name during the Kafka cluster creation, you can leave everything at its defaults here as well. Once the topic’s status changes to “Ready”, it has been created successfully.
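
Again, the console is just generating a KafkaTopic custom resource behind the scenes. A rough sketch, assuming the defaults above; the topic name, counts and retention value are illustrative:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    # tells the Topic Operator which cluster this topic belongs to
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1
  config:
    # per-topic housekeeping: keep messages for 7 days before deletion
    retention.ms: 604800000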

Last step: we need a simple producer and consumer to test the cluster. In the Strimzi GitHub repo, the contributors have been kind enough to prepare a few really handy K8s deployment manifests we can use straight away. Grab java-kafka-producer.yaml and java-kafka-consumer.yaml and deploy them into the same Project in OpenShift. You can do that by switching to the Developer perspective in the OpenShift web console and using Import YAML. Change the topic and cluster names accordingly if you used different ones.
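
For orientation, the part of java-kafka-producer.yaml you are most likely to edit is the container environment, which points the client at the cluster and the topic. A trimmed sketch is below; the exact variable names and image tag may differ between versions of the Strimzi examples:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-kafka-producer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: java-kafka-producer
  template:
    metadata:
      labels:
        app: java-kafka-producer
    spec:
      containers:
        - name: java-kafka-producer
          image: quay.io/strimzi-examples/java-kafka-producer:latest
          env:
            - name: BOOTSTRAP_SERVERS
              # <cluster-name>-kafka-bootstrap is the Service Strimzi creates
              value: my-cluster-kafka-bootstrap:9092
            - name: TOPIC
              value: my-topic
            - name: DELAY_MS
              value: "1000"

The java-kafka-consumer.yaml manifest follows the same shape, minus the send delay.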

In the java-kafka-producer logs, you should see it sending a mock Hello World message on every tick.

On the java-kafka-consumer side, you should see it receiving the same Hello World messages.

Voila, you have successfully created your first Kafka cluster and run the very classic Hello World on it. That’s it for a brief introduction to Strimzi! I hope it helps on your journey of learning the amazing Kafka.


Weng Kee Teh

A builder, a gamer, an explorer. Disclaimer: the views expressed here are those of the author, and do not reflect the views of his employer.