There is plenty of documentation online about putting a load balancer in front of a Kafka cluster to terminate SSL, provide access to a private Kubernetes setup, and so on. This post has nothing to do with that.

The use case is as follows: you have clients who cannot be bothered to update their bootstrap.servers configuration, so you want to give them a single URL that will never change. The clients still have network connectivity to the entire Kafka cluster; the load balancer only needs to sit in the path for the initial connection that fetches the cluster metadata, after which the clients connect directly to the brokers as normal. So how do we do SSL passthrough for that bootstrap connection? HAProxy.

Imagine we have a three-node Kafka cluster running locally on ports 9093, 9095, and 9097. We are going to set up HAProxy on port 9999 and have it round-robin across these Kafka brokers for the metadata bootstrap.

# To set up logging to /var/log/haproxy.log
# 1. Set SYSLOGD_OPTIONS="-r" in /etc/sysconfig/syslog
# 2. Add local2.* /var/log/haproxy.log to /etc/rsyslog.conf

# add to haproxy.cfg
listen kafka
  bind 127.0.0.1:9999
  mode tcp
  balance roundrobin
  server  kafka1 127.0.0.1:9093 check
  server  kafka2 127.0.0.1:9095 check
  server  kafka3 127.0.0.1:9097 check
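
Note that the rsyslog steps above only receive the messages; haproxy.cfg itself also needs log directives pointing at the local2 facility. A minimal sketch of the global/defaults sections that would sit above the listen block (timeouts and values here are illustrative assumptions, not from the setup above):

global
  log 127.0.0.1 local2

defaults
  log global
  mode tcp
  option tcplog
  timeout connect 5s
  timeout client  30s
  timeout server  30s

Running haproxy -c -f /etc/haproxy/haproxy.cfg will validate the configuration before you reload it.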

Originally we would use a Kafka client like this:

kafka-console-consumer --bootstrap-server localhost:9093,localhost:9095,localhost:9097 --consumer.config config/client.properties --topic lb.topic.1

Now we can do this:

kafka-console-consumer --bootstrap-server localhost:9999 --consumer.config config/client.properties --topic lb.topic.1
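
To sanity-check the bootstrap path, one option (assuming the standard Kafka CLI tools and the same client config) is to request the cluster's API versions through the proxy; the output should list all three brokers, confirming the metadata came back through port 9999:

kafka-broker-api-versions --bootstrap-server localhost:9999 --command-config config/client.properties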

The SSL handshake passes straight through, the metadata is returned, and the clients operate as they normally would, with full connectivity to the cluster.
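
For reference, the config/client.properties used above just carries the usual SSL client settings; a minimal sketch with placeholder paths and passwords (not taken from the original setup):

security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
# Only needed if the brokers require client authentication (mutual TLS)
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit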