Kafka broker fails to start in Docker
I have a fairly standard compose file. The first time I run it, all containers start fine. But after running `docker-compose -f kafka-compose.yml down` and bringing the stack up again, I get the following error:
```
broker | [2021-10-06 09:57:13,398] ERROR Error while creating ephemeral at /brokers/ids/1, node already exists and owner '72075955082625025' does not match current session '72075962265632769' (kafka.zk.KafkaZkClient$CheckedEphemeral)
```
I didn't find server.properties in the broker container. Could that be the reason? What needs to be changed?
I've read that this can happen when not all settings are persisted in the mounted folders, so something is regenerated on start. But which setting?
Here's my docker-compose file:
```yaml
version: '3.3'

networks:
  default-dev-network:
    external: true

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    volumes:
      - $PWD/kafka-data/zookeeper/var-lib/data:/var/lib/zookeeper/data
      - $PWD/kafka-data/zookeeper/var-lib/log:/var/lib/zookeeper/log
      - $PWD/kafka-data/zookeeper/etc-kafka:/etc/kafka
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - default-dev-network

  broker:
    image: confluentinc/cp-kafka:6.2.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
      - "9092:9092"
      - "9101:9101"
    volumes:
      - $PWD/kafka-data/kafka/data:/var/lib/kafka/data
      - $PWD/kafka-data/kafka-home:/etc/kafka
    # entrypoint: sh -c 'sleep 30 && /etc/confluent/docker/run'
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_LOG4J_LOGGERS: "org.apache.zookeeper=ERROR,\
        org.apache.kafka=ERROR,\
        kafka=ERROR,\
        kafka.cluster=ERROR,\
        kafka.controller=ERROR,\
        kafka.coordinator=ERROR,\
        kafka.log=ERROR,\
        kafka.server=ERROR,\
        kafka.zookeeper=ERROR,\
        state.change.logger=ERROR"
      # KAFKA_LOG4J_LOGGERS: "kafka.controller=ERROR, kafka.coordinator=ERROR, state.change.logger=ERROR"
      KAFKA_LOG4J_ROOT_LOGLEVEL: ERROR
      KAFKA_TOOLS_LOG4J_LOGLEVEL: ERROR
    networks:
      - default-dev-network

  schema-registry:
    image: confluentinc/cp-schema-registry:6.2.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
    networks:
      - default-dev-network

  control-center:
    image: confluentinc/cp-enterprise-control-center:6.2.0
    hostname: control-center
    container_name: control-center
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
    networks:
      - default-dev-network
```
Solution 1:[1]
Yes, this error is common if you aren't deleting your volume data across container restarts.
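The stale registration can be inspected directly in Zookeeper. A quick sketch, assuming the `zookeeper` container name from the compose file and the `zookeeper-shell` tool that ships in the Confluent images:

```shell
# Show the broker registration currently holding /brokers/ids/1.
# The znode is ephemeral: it is owned by a Zookeeper session and is
# deleted automatically when that session expires.
docker exec zookeeper zookeeper-shell localhost:2181 get /brokers/ids/1
```

While the old broker's session is still within its timeout, the ephemeral node lingers and a restarted broker (with a new session ID) cannot re-create it, which is exactly the owner/session mismatch in the error above.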
> didn't find server.properties in broker container
It's there:

```
...
Status: Downloaded newer image for confluentinc/cp-kafka:6.2.0
sh-4.4$ ls /etc/kafka/
connect-console-sink.properties    connect-mirror-maker.properties  secrets
connect-console-source.properties  connect-standalone.properties    server.properties
connect-distributed.properties     consumer.properties              tools-log4j.properties
connect-file-sink.properties       kraft                            trogdor.conf
connect-file-source.properties     log4j.properties                 zookeeper.properties
connect-log4j.properties           producer.properties
sh-4.4$ ls /etc/kafka/server.properties
/etc/kafka/server.properties
```
> not all settings are persisted in mounted folders hence it reloads on start. But which one?
They are, but the error you're getting comes from the Zookeeper mount, not Kafka's volume data.
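Since the stale ephemeral node lives in the Zookeeper state that is bind-mounted to the host, one way to get a clean start is to clear those host directories between runs. A minimal sketch, assuming the paths from the compose file above:

```shell
# Stop the stack, then wipe the host-mounted Zookeeper state so the broker
# can register a fresh ephemeral node at /brokers/ids/1 on the next start.
# WARNING: this discards everything Zookeeper knows, including topic metadata.
docker-compose -f kafka-compose.yml down
rm -rf "$PWD/kafka-data/zookeeper/var-lib/data" \
       "$PWD/kafka-data/zookeeper/var-lib/log"
docker-compose -f kafka-compose.yml up -d
```

If you wipe the Zookeeper state, consider wiping `$PWD/kafka-data/kafka/data` as well: the broker stores a cluster ID in `meta.properties` and will refuse to start if it no longer matches what the fresh Zookeeper reports.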
Solution 2:[2]
Identify which volumes to remove:

```
docker volume ls
```

Remove them (the `beepboop_` prefix comes from the answerer's compose project name; yours will differ):

```
docker volume rm beepboop_zookeeper_data
docker volume rm beepboop_zookeeper_txns
```

Restart Kafka:

```
docker-compose restart kafka
```
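If your stack declares named volumes rather than host bind mounts, the same cleanup can be done in one step; a sketch, not specific to the compose file above:

```shell
# "down -v" removes the containers AND the named volumes declared in the
# compose file, so Zookeeper comes back with no stale /brokers/ids entries.
docker-compose -f kafka-compose.yml down -v
docker-compose -f kafka-compose.yml up -d
```

Note that `down -v` does not touch host bind mounts such as `$PWD/kafka-data/...`; those must be removed manually.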
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | OneCricketeer |
| Solution 2 | jmunsch |
