I have a Kafka Connect jar that needs to run as a Docker container. I want to capture all the connect logs in a log file inside the container (preferably at the directory/file /etc/kafka/kafka-connect-logs), so that they can later be pushed to a volume on localhost (where the Docker engine runs). When I change connect-log4j.properties to append logs to a log file, I find that no log file gets created. If I try the same thing without Docker, running Kafka Connect on a local Linux VM with connect-log4j.properties changed to write logs to a file, it works perfectly — just not from Docker. Any suggestions would be very helpful.
Dockerfile
FROM confluent/platform
COPY Test.jar /usr/local/bin/
COPY kafka-connect-docker.sh /usr/local/bin/
COPY connect-distributed.properties /usr/local/bin/
COPY connect-log4j.properties /etc/kafka/connect-log4j.properties
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-yq", "curl"]
RUN ["chown", "-R", "confluent:confluent", "/usr/local/bin/kafka-connect-docker.sh", "/usr/local/bin/connect-distributed.properties", "/usr/local/bin/Test.jar"]
RUN ["chmod", "+x", "/usr/local/bin/kafka-connect-docker.sh", "/usr/local/bin/connect-distributed.properties", "/usr/local/bin/Test.jar"]
RUN ["chown", "-R", "confluent:confluent", "/etc/kafka/connect-log4j.properties"]
RUN ["chmod", "777", "/usr/local/bin/kafka-connect-docker.sh", "/etc/kafka/connect-log4j.properties"]
EXPOSE 8083
CMD [ "/usr/local/bin/kafka-connect-docker.sh" ]
connect-log4j.properties
# Root logger option
log4j.rootLogger = INFO, FILE
# Direct log messages to a file
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=/etc/kafka/log.out
# Define the layout for file appender
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%m%n
log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.I0Itec.zkclient=ERROR
kafka-connect-docker.sh
#!/bin/bash
export CLASSPATH=/usr/local/bin/Test.jar
exec /usr/bin/connect-distributed /usr/local/bin/connect-distributed.properties
When I use the default connect-log4j.properties (which appends logs to the console), it works fine, but no log file is created in Docker. Also, the same setup without Docker works fine on the local VM (the log file is created).
Answer (score: 1)
Result of the discussion:
Declare a volume in the Dockerfile, and point the logging configuration's output directly into that volume. Mount the volume when you run `docker run` (or however you start the container), and you should be fine.
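A minimal sketch of that approach, reusing the Dockerfile and paths from the question (the image tag `my-kafka-connect` and host directory `/tmp/connect-logs` are illustrative, not from the original post):

```
# In the Dockerfile, declare the log directory as a volume
# (added before EXPOSE/CMD):
#   VOLUME ["/etc/kafka/kafka-connect-logs"]
#
# In connect-log4j.properties, point the file appender into that directory:
#   log4j.appender.FILE.File=/etc/kafka/kafka-connect-logs/log.out

# Build, then run with a bind mount so the logs land on the Docker host:
docker build -t my-kafka-connect .
docker run -d -p 8083:8083 \
  -v /tmp/connect-logs:/etc/kafka/kafka-connect-logs \
  my-kafka-connect

# The log file is now visible from the host:
tail -f /tmp/connect-logs/log.out
```

Writing to a mounted volume also sidesteps permission surprises inside the container's root filesystem; just make sure the `confluent` user the process runs as can write to the mounted directory.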