Every 18 to 20 hours, the Kafka service fails with a log error. I have gone through many posts, and all of them suggest either using a double backslash (`\\`) in the log directory path or deleting the old logs and starting the Kafka service again. While that does get Kafka running again, it keeps hitting the same problem over and over. How can we fix this permanently so it is production ready?
Also, is there any way to set up a fallback mechanism so that if Kafka fails for some reason, it restarts automatically?
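For the auto-restart part, a common approach is to run the broker under a supervisor that relaunches it whenever the process exits. On Windows this is usually done with a service wrapper (e.g. NSSM) or Task Scheduler, but the idea can be sketched in a few lines of Python. This is a minimal illustration, not production tooling; the command, `max_restarts`, and `delay_seconds` values are placeholders you would adapt.

```python
import subprocess
import time

def supervise(cmd, max_restarts=5, delay_seconds=5):
    """Run `cmd` and restart it whenever it exits,
    up to `max_restarts` times. Returns the number of restarts."""
    restarts = 0
    while True:
        subprocess.run(cmd)  # blocks until the process exits
        if restarts >= max_restarts:
            return restarts
        restarts += 1
        time.sleep(delay_seconds)

# Hypothetical usage (path assumed, not from my actual setup):
# supervise([r"C:\kafka\bin\windows\kafka-server-start.bat",
#            r"C:\kafka\config\server.properties"])
```

A real deployment would add backoff, logging, and health checks, but even this loop is enough to bring the broker back up after a crash.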
Below is a snippet of my server.properties.
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=c:\\kafka\\kafka-logs-cos10
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=1
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000