In a single-cluster/single-instance installation of Kafka / ZooKeeper v2.4.1 (binary kafka_2.13-2.4.1.tgz) on Windows Subsystem for Linux (WSL) running Ubuntu 18.04, the Kafka broker shuts down unexpectedly while cleaning log files, with the error message below.
The directory `__consumer_offsets-11` that fails to clean up still exists on disk. I have tried several approaches without success. The error occurs many times a day, regardless of the log retention configuration; the server configuration properties (server.properties) are the defaults:
ERROR Failed to clean up log for __consumer_offsets-11 in dir /tmp/kafka-logs due to IOException (kafka.server.LogDirFailureChannel)
java.io.IOException: Invalid argument
at java.io.RandomAccessFile.setLength(Native Method)
at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:188)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.scala:17)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.log.AbstractIndex.resize(AbstractIndex.scala:174)
at kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:240)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.scala:17)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:240)
at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:508)
at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:595)
at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:530)
at kafka.log.Cleaner.$anonfun$doClean$6$adapted(LogCleaner.scala:529)
at scala.collection.immutable.List.foreach(List.scala:305)
at kafka.log.Cleaner.doClean(LogCleaner.scala:529)
at kafka.log.Cleaner.clean(LogCleaner.scala:503)
at kafka.log.LogCleaner$CleanerThread.cleanLog(LogCleaner.scala:372)
at kafka.log.LogCleaner$CleanerThread.cleanFilthiestLog(LogCleaner.scala:345)
at kafka.log.LogCleaner$CleanerThread.tryCleanFilthiestLog(LogCleaner.scala:325)
at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:314)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
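For reference, the stock server.properties shipped with the Kafka binary distribution points the log directory at /tmp, which is the path the cleaner is failing on in the trace above (values below are the usual distribution defaults; verify against your own copy):

```properties
# Default broker settings relevant here (from the stock server.properties)
log.dirs=/tmp/kafka-logs
log.retention.hours=168
log.segment.bytes=1073741824
```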
Answer (score: 0):
The problem seems to be your log.dirs, currently set to /tmp/kafka-logs. This can cause trouble when your machine shuts down, because the contents of /tmp/ are cleared. Try changing the path to a permanent location instead of /tmp/.
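A minimal sketch of that change, assuming a local copy of server.properties and using `$HOME/kafka-logs` as the persistent location (both are illustrative choices, adjust to your installation's config path):

```shell
# Edit a local copy of the broker config for illustration;
# in a real setup this would be config/server.properties.
CONF=server.properties.demo
printf 'log.dirs=/tmp/kafka-logs\nlog.retention.hours=168\n' > "$CONF"

# Create a persistent log directory and point log.dirs at it
# instead of /tmp, which may be wiped on shutdown.
mkdir -p "$HOME/kafka-logs"
sed -i "s|^log.dirs=.*|log.dirs=$HOME/kafka-logs|" "$CONF"

# Show the rewritten setting.
grep '^log.dirs=' "$CONF"
```

After changing log.dirs, restart the broker; existing data under /tmp/kafka-logs is not moved automatically, so either copy it over first or accept starting with empty logs.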