I just installed Kafka (from the Confluent Platform) on my Windows machine. I started Zookeeper and Kafka, created a topic, and producing to and consuming from it works fine. However, as soon as I delete a topic, Kafka crashes:
PS C:\confluent-4.1.1> .\bin\windows\kafka-topics.bat -zookeeper 127.0.0.1:2181 --topic foo --create --partitions 1 --replication-factor 1
Created topic "foo".
PS C:\confluent-4.1.1> .\bin\windows\kafka-topics.bat -zookeeper 127.0.0.1:2181 --topic foo --delete
Topic foo is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
This is the crash output:
[2018-06-08 09:44:54,185] ERROR Error while renaming dir for foo-0 in log dir C:\confluent-4.1.1\data\kafka (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: C:\confluent-4.1.1\data\kafka\foo-0 -> C:\confluent-4.1.1\data\kafka\foo-0.cf697a92ed5246c0977bf9a279f15de8-delete
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
at kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:579)
at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
at kafka.log.Log.renameDir(Log.scala:577)
at kafka.log.LogManager.asyncDelete(LogManager.scala:828)
at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:240)
at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:235)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
at kafka.cluster.Partition.delete(Partition.scala:235)
at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:347)
at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:377)
at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:375)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:375)
at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:205)
at kafka.server.KafkaApis.handle(KafkaApis.scala:116)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.nio.file.AccessDeniedException: C:\confluent-4.1.1\data\kafka\foo-0 -> C:\confluent-4.1.1\data\kafka\foo-0.cf697a92ed5246c0977bf9a279f15de8-delete
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
... 23 more
[2018-06-08 09:44:54,187] INFO [ReplicaManager broker=0] Stopping serving replicas in dir C:\confluent-4.1.1\data\kafka (kafka.server.ReplicaManager)
[2018-06-08 09:44:54,192] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions (kafka.server.ReplicaFetcherManager)
[2018-06-08 09:44:54,193] INFO [ReplicaAlterLogDirsManager on broker 0] Removed fetcher for partitions (kafka.server.ReplicaAlterLogDirsManager)
[2018-06-08 09:44:54,195] INFO [ReplicaManager broker=0] Broker 0 stopped fetcher for partitions and stopped moving logs for partitions because they are in the failed log directory C:\confluent-4.1.1\data\kafka. (kafka.server.ReplicaManager)
[2018-06-08 09:44:54,195] INFO Stopping serving logs in dir C:\confluent-4.1.1\data\kafka (kafka.log.LogManager)
[2018-06-08 09:44:54,197] ERROR Shutdown broker because all log dirs in C:\confluent-4.1.1\data\kafka have failed (kafka.log.LogManager)
[2018-06-08 09:44:54,198] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions (kafka.server.ReplicaFetcherManager)
The user running Zookeeper and Kafka has full access rights to C:\confluent-4.1.1\data\kafka.
What am I missing?
Answer 0 (score: 3)
I had a similar problem. It only happens under Windows; see KAFKA-1194, which still applies to Kafka 1.1.0.
The only workaround available is to disable the log cleaner: log.cleaner.enable=false
For local development under Windows you can ignore this issue, since it does not occur on other operating systems.
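As a rough sketch, assuming a Confluent layout like the one in the question, the workaround goes into the broker configuration (e.g. etc\kafka\server.properties) and takes effect after a broker restart:

# server.properties -- work around KAFKA-1194 on Windows by disabling the log cleaner
# (this also disables log compaction, so only do this for local development)
log.cleaner.enable=false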
Answer 1 (score: 2)
I faced a similar problem after deleting a topic. I had to go to the topic's location and delete it manually, and that worked.
/tmp/kafka-logs/[yourTopicName]
I am not sure if the same will apply to you, since I am also new to Kafka.
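For illustration only, assuming the log directory from the error message above (adjust it to your own log.dirs value), the equivalent on the asker's Windows setup would look something like this, with the broker stopped first:

# remove the leftover partition folder for topic "foo"
Remove-Item -Recurse -Force C:\confluent-4.1.1\data\kafka\foo-0
# remove the half-renamed "-delete" folder from the failed rename, if present
Remove-Item -Recurse -Force C:\confluent-4.1.1\data\kafka\foo-0.*-delete
# then restart Zookeeper and Kafka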
Answer 2 (score: 1)
I know I am late to the party, but keep in mind that even if you delete your topic manually or via some Kafka UI, and you delete all the Kafka logs, Kafka may still fail to start because of the state it has synced with ZK.
So make sure you also clean up the ZK state by deleting ZK's logs.
Please be aware that these actions are irreversible.
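One hedged way to do that, without wiping the entire ZooKeeper data directory, is to delete just the topic's znodes with the zookeeper-shell tool that ships with Kafka. The paths below are the standard znodes Kafka keeps for a topic named foo; on older ZooKeeper versions use rmr instead of deleteall:

.\bin\windows\zookeeper-shell.bat 127.0.0.1:2181
# inside the shell, remove the state ZooKeeper holds for the topic
deleteall /brokers/topics/foo
deleteall /admin/delete_topics/foo
deleteall /config/topics/foo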
Answer 3 (score: 0)
Possible duplicate of Exception during topic deletion when Kafka is hosted in Docker in Windows
If Kafka is hosted on Windows, delete all the logs in the Zookeeper and Kafka-logs folders under C:/tmp.
Answer 4 (score: 0)
Problem: I faced a similar issue after deleting a topic. Zookeeper started successfully, but while running Kafka I got the issue mentioned above.
Analysis: In my case, what I had done was redirect the Kafka logs to a new folder location, C:\Tools\kafka_2.13-2.6.0\kafka-test-logs, but I forgot to create the kafka-test-logs folder. In that case Kafka automatically creates a default folder with the mangled path name Toolskafka_2.13-2.6.0kafka-test-logs. So even after deleting this log folder it still did not work for me.
Solution: First I stopped Zookeeper. I created the new kafka-test-logs folder that I had forgotten earlier, then deleted the default log folder that had been created for Kafka, and then restarted Zookeeper and the Kafka server. That worked for me.
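A likely reason for the mangled folder name is that backslashes act as escape characters in a .properties file, so single backslashes in log.dirs get swallowed. As a sketch (using the path from this answer), writing the path with forward slashes avoids the problem:

# server.properties -- use forward slashes (or doubled backslashes) on Windows
log.dirs=C:/Tools/kafka_2.13-2.6.0/kafka-test-logs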
Thank you!! Cheers and happy coding.
Answer 5 (score: 0)
1. Stop the Zookeeper and Kafka servers.
2. Go to the 'kafka-logs' folder; there you will see a list of Kafka topic folders. Delete the folder with the topic name.
3. Go to the 'zookeeper-data' folder and delete the data inside it.
4. Start the Zookeeper and Kafka servers again.
Note: If you get the error "The Cluster ID xxxxxxxxxx does not match stored clusterId", you must delete all files in Kafka's log directory (a PowerShell sketch of these steps follows below).
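As a rough PowerShell sketch of steps 1-4 plus the note above (the folder names kafka-logs and zookeeper-data are the ones used in this answer; substitute your own log.dirs and dataDir locations, and the name of the topic you deleted):

# 1. stop the Kafka and Zookeeper processes first
# 2. delete the folder(s) of the deleted topic, e.g. for topic "foo"
Remove-Item -Recurse -Force .\kafka-logs\foo-0
# 3. clear the ZooKeeper data directory
Remove-Item -Recurse -Force .\zookeeper-data\*
# if you hit the "Cluster ID does not match stored clusterId" error, wipe the whole
# log directory so meta.properties is regenerated when the broker starts
Remove-Item -Recurse -Force .\kafka-logs\*
# 4. start Zookeeper and Kafka again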