Failed to add Kafka service

Asked: 2018-05-21 07:39:21

Tags: apache-kafka cloudera

We have a Cloudera Express 5.11.0 cluster and I am trying to add Kafka 3.0 as a service in Cloudera Manager, but I get an error: it fails to start the brokers on all of the nodes, and I don't see any clear error message. I downloaded the parcel, distributed it, and activated it successfully.

I have a few questions:

1) What value should I set for ZooKeeper Root? Is this something I should decide myself, or does it depend on the ZooKeeper installation? The most common value I have seen is /kafka, so that is what I set it to.
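
As far as I understand (this is just my sketch of what Cloudera Manager should generate, with the hostname taken from my logs below), the ZooKeeper Root simply ends up as the chroot suffix on the broker's zookeeper.connect setting:

    # Excerpt from the broker's kafka.properties as I expect it to be generated,
    # with the ZooKeeper Root "/kafka" appended as a chroot path
    zookeeper.connect=VMClouderaMasterDev01:2181/kafka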

2) Our ZooKeeper runs standalone and we receive alerts about its maximum request latency; could that be related?
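
For what it's worth, the latency that the alert refers to can be checked directly against the standalone ZooKeeper, e.g. as follows (assuming the stat/mntr four-letter commands are enabled and nc is available on the host):

    # Min/avg/max request latency are reported in the "stat" output
    echo stat | nc VMClouderaMasterDev01 2181
    # Machine-readable variant, filtered to the latency counters
    echo mntr | nc VMClouderaMasterDev01 2181 | grep latency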

3) In step 4 of adding Kafka as a service, it fails while starting the brokers on the nodes, and I'm not sure what the actual error is. I saw some messages about OutOfMemory, but I'm not sure whether that is the real cause or just a side effect.

I'll add the last few lines of the logs I found:

stdout:

 AUTHENTICATE_ZOOKEEPER_CONNECTION: true
 SUPER_USERS: kafka
 Kafka version found: 0.11.0-kafka3.0.0
 Sentry version found: 1.5.1-cdh5.11.0
 ZK_PRINCIPAL_NAME: zookeeper
 Final Zookeeper Quorum is VMClouderaMasterDev01:2181/kafka
 security.inter.broker.protocol inferred as PLAINTEXT
 LISTENERS=listeners=PLAINTEXT://VMClouderaWorkerDev03:9092,
 java.lang.OutOfMemoryError: Java heap space
 Dumping heap to /tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof ...
 Heap dump file created [12122526 bytes in 0.086 secs]
 #
 # java.lang.OutOfMemoryError: Java heap space
 # -XX:OnOutOfMemoryError="/usr/lib64/cmf/service/common/killparent.sh"
 #   Executing /bin/sh -c "/usr/lib64/cmf/service/common/killparent.sh"...

stderr:

+ export 'KAFKA_JVM_PERFORMANCE_OPTS=-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true'
+ KAFKA_JVM_PERFORMANCE_OPTS='-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true'
+ [[ false == \t\r\u\e ]]
+ exec /opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/bin/kafka-server-start.sh /var/run/cloudera-scm-agent/process/1177-kafka-KAFKA_BROKER/kafka.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
+ grep -q OnOutOfMemoryError /proc/113208/cmdline
+ RET=0
+ '[' 0 -eq 0 ']'
+ TARGET=113208
++ date
+ echo Thu May 17 10:36:08 CDT 2018
+ kill -9 113208

/var/log/kafka/*.log:

50.1.22:2181, initiating session
2018-05-17 10:36:08,028 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server VMClouderaMasterDev01/10.150.1.22:2181, sessionid = 0x1626c7087e729cb, negotiated timeout = 6000
2018-05-17 10:36:08,028 INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (SyncConnected)
2018-05-17 10:36:08,183 INFO kafka.server.KafkaServer: Cluster ID = cM_4kCm6TZWxttCAXDo4GQ
2018-05-17 10:36:08,185 WARN kafka.server.BrokerMetadataCheckpoint: No meta.properties file under dir /var/local/kafka/data/meta.properties
2018-05-17 10:36:08,222 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Fetch]: Starting
2018-05-17 10:36:08,224 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Produce]: Starting
2018-05-17 10:36:08,226 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Request]: Starting
2018-05-17 10:36:08,279 INFO kafka.log.LogManager: Loading logs.
2018-05-17 10:36:08,287 INFO kafka.log.LogManager: Logs loading complete in 8 ms.

1 Answer:

Answer 0 (score: 1):

In my case, the solution was to increase the java_heap_broker size to 1G.
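
That is the broker's Java heap size setting in the Kafka service configuration in Cloudera Manager. For comparison, a rough sketch of the equivalent on a plain Apache Kafka installation (paths and values here are only an example) would be:

    # Give the broker a 1 GB heap instead of the small default,
    # then start it with the broker properties file
    export KAFKA_HEAP_OPTS="-Xms1g -Xmx1g"
    bin/kafka-server-start.sh config/server.properties

This also matches the heap dump in the stdout above, which is only about 12 MB: the broker ran out of a very small heap rather than leaking gigabytes of memory.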