Can't start Apache Kafka with supervisord

Date: 2017-01-23 22:15:22

Tags: ubuntu apache-kafka ubuntu-16.04 supervisord

I have an Ubuntu 16.04 machine with Apache Kafka installed. Currently, it works perfectly when I start it with a start_kafka.sh script containing the following:

JMX_PORT=17264 KAFKA_HEAP_OPTS="-Xms1024M -Xmx3072M" /home/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh -daemon /home/kafka/kafka_2.11-0.10.1.0/config/server.properties
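For completeness, a directly runnable version of that script would begin with a shebang; the sketch below assumes bash (the shebang line is an assumption, not shown in the original):

#!/usr/bin/env bash
# start_kafka.sh - start the Kafka broker with explicit JMX and heap settings
JMX_PORT=17264 KAFKA_HEAP_OPTS="-Xms1024M -Xmx3072M" \
  /home/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh -daemon \
  /home/kafka/kafka_2.11-0.10.1.0/config/server.properties

The script also needs to be marked executable (chmod +x start_kafka.sh) if supervisor or a shell is to run it directly.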

Now I want to use supervisor to restart the process automatically when it fails and to start it as soon as the machine boots. The problem is that I cannot get supervisor to start Kafka.

I installed supervisor with pip and put this configuration file at /etc/supervisord.conf (shown after the short install sketch below):
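For context, installing and launching supervisor looked roughly like this; this is a sketch, assuming pip and root access, so the exact commands may differ from what was actually run:

# install supervisor from PyPI
pip install supervisor
# generate a default configuration as a starting point, then edit it
echo_supervisord_conf > /etc/supervisord.conf
# start the supervisord daemon against that configuration
supervisord -c /etc/supervisord.conf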

; Supervisor config file.
;
; For more information on the config file, please see:
; http://supervisord.org/configuration.html

[unix_http_server]
file=/tmp/supervisor.sock   ; (the path to the socket file)

[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10           ; (num of main logfile rotation backups;default 10)
loglevel=info                ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false               ; (start in foreground if true;default false)
minfds=1024                  ; (min. avail startup file descriptors;default 1024)
minprocs=200                 ; (min. avail process descriptors;default 200)

; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL  for a unix socket

[program:kafka]
command=/home/kafka/kafka_2.11-0.10.1.0/start_kafka.sh ; the program (relative uses PATH, can take args)
;process_name=%(program_name)s ; process_name expr (default %(program_name)s)
startsecs=10                   ; # of secs prog must stay up to be running (def. 1)
startretries=3                ; max # of serial start failures when starting (default 3)
;autorestart=unexpected        ; when to restart if exited after running (def: unexpected)
;exitcodes=0,2                 ; 'expected' exit codes used with autorestart (default 0,2)
stopsignal=TERM               ; signal used to kill process (default TERM)
stopwaitsecs=180               ; max num secs to wait b4 SIGKILL (default 10)
stdout_logfile=NONE        ; stdout log path, NONE for none; default AUTO
;environment=A="1",B="2"       ; process environment additions (def no adds)

When I try to start Kafka, I get the following error:

# supervisorctl start kafka
kafka: ERROR (spawn error)

The supervisor log (/tmp/supervisord.log) contains:

2017-01-23 22:10:24,532 INFO spawned: 'kafka' with pid 21311
2017-01-23 22:10:24,536 INFO exited: kafka (exit status 127; not expected)
2017-01-23 22:10:25,542 INFO spawned: 'kafka' with pid 21312
2017-01-23 22:10:25,559 INFO exited: kafka (exit status 127; not expected)
2017-01-23 22:10:27,562 INFO spawned: 'kafka' with pid 21313
2017-01-23 22:10:27,567 INFO exited: kafka (exit status 127; not expected)
2017-01-23 22:10:30,571 INFO spawned: 'kafka' with pid 21314
2017-01-23 22:10:30,576 INFO exited: kafka (exit status 127; not expected)
2017-01-23 22:10:31,578 INFO gave up: kafka entered FATAL state, too many start retries too quickly
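Exit status 127 from supervisor usually means the command itself could not be executed, for example because the file is not found or not executable, or because something it depends on (such as java) is not on the PATH that supervisord passes to its children. One way to approximate that environment is to run the script with a stripped-down environment; a sketch, where the PATH value is an assumption:

# run the script with a minimal environment, roughly similar to what
# supervisord provides, to check whether java and the Kafka scripts resolve
env -i HOME=/home/kafka PATH=/usr/bin:/bin \
  /home/kafka/kafka_2.11-0.10.1.0/start_kafka.sh
echo "exit status: $?"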

I should mention that I have already tried removing the -daemon flag from start_kafka.sh so it could be used with supervisor, but without success.

Does anyone know what is going on?

2 Answers:

Answer 0 (score: 2):

The following supervisor config file works for me; it comes from https://github.com/miguno/wirbelsturm via https://github.com/miguno/puppet-kafka. The main difference is that it uses kafka-run-class.sh instead of kafka-server-start.sh.

Note that you need to update the various paths so they match your setup; for example, you must change /opt/kafka/bin/kafka-run-class.sh to /home/kafka/kafka_2.11-0.10.1.0/bin/kafka-run-class.sh.

[program:kafka-broker]
command=/opt/kafka/bin/kafka-run-class.sh kafka.Kafka /opt/kafka/config/server.properties
numprocs=1
numprocs_start=0
priority=999
autostart=true
autorestart=true
startsecs=10
startretries=999
exitcodes=0,2
stopsignal=INT
stopwaitsecs=120
stopasgroup=true
directory=/
user=kafka
redirect_stderr=false
stdout_logfile=/var/log/supervisor/kafka-broker/kafka-broker.out
stdout_logfile_maxbytes=20MB
stdout_logfile_backups=5
stderr_logfile=/var/log/supervisor/kafka-broker/kafka-broker.err
stderr_logfile_maxbytes=20MB
stderr_logfile_backups=10
environment=JMX_PORT=9999,KAFKA_GC_LOG_OPTS="-Xloggc:/var/log/kafka/daemon-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps",KAFKA_HEAP_OPTS="-Xms512M -Xmx512M -XX:NewSize=200m -XX:MaxNewSize=200m",KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false",KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true",KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/kafka/config/log4j.properties",KAFKA_OPTS="-XX:CMSInitiatingOccupancyFraction=70 -XX:+PrintTenuringDistribution"
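Two practical notes when adapting this config: supervisord typically refuses to start a program whose stdout_logfile/stderr_logfile directories do not exist, and the new [program] section has to be loaded into the running daemon. A sketch of those steps, using the paths from the config above (adapt them to yours):

# create the log directory referenced by stdout_logfile / stderr_logfile
mkdir -p /var/log/supervisor/kafka-broker
# make supervisord pick up the new or changed program section
supervisorctl reread
supervisorctl update
# start the broker and check that it stays up
supervisorctl start kafka-broker
supervisorctl status kafka-broker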

Answer 1 (score: 2):

I finally managed to get supervisor working with Kafka by making two changes:

  • Run Kafka without the -daemon flag, because supervisor needs a non-daemonized process to manage
  • Explicitly define the Java path in the supervisor configuration file

Here is the working configuration:

start_kafka.sh

JMX_PORT=17264 KAFKA_HEAP_OPTS="-Xms1024M -Xmx3072M" /home/kafka/kafka_2.11-0.10.1.0/bin/kafka-server-start.sh /home/kafka/kafka_2.11-0.10.1.0/config/server.properties

supervisord.conf

[unix_http_server]
file=/var/run/supervisor.sock   ; (the path to the socket file)
chmod=0700                       ; socket file mode (default 0700)

[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/var/log/supervisor            ; ('AUTO' child log dir, default $TEMP)

; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL  for a unix socket

; The [include] section can just contain the "files" setting.  This
; setting can list multiple files (separated by whitespace or
; newlines).  It can also contain wildcards.  The filenames are
; interpreted as relative to this file.  Included files *cannot*
; include files themselves.

[include]
files = /etc/supervisor/conf.d/*.conf

[program:kafka]
command=/home/kafka/kafka_2.11-0.10.1.0/start_kafka.sh
directory=/home/kafka/kafka_2.11-0.10.1.0
user=root
autostart=true
autorestart=true
stdout_logfile=/var/log/kafka/stdout.log
stderr_logfile=/var/log/kafka/stderr.log
environment = JAVA_HOME=/usr/lib/jvm/java-8-oracle
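To apply this configuration and confirm that the broker stays up, the steps are roughly the following; this is a sketch, since the exact log locations and JAVA_HOME path may differ on your system:

# the program's stdout/stderr logs live under /var/log/kafka, so create it first
mkdir -p /var/log/kafka
# sanity-check that the JAVA_HOME set in the environment= line actually exists
ls /usr/lib/jvm/java-8-oracle/bin/java
# reload the configuration and (re)start the kafka program
supervisorctl reread
supervisorctl update
supervisorctl restart kafka
# confirm the process stays in the RUNNING state past startsecs
supervisorctl status kafka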