Kafka Streams startup issue - org.apache.kafka.streams.errors.LockException

Date: 2017-09-13 15:29:19

Tags: apache-kafka kafka-consumer-api apache-kafka-streams

I have a Kafka Streams application (version 0.11) that consumes data from several topics, joins the data, and writes the result to another topic.

Kafka configuration:

5 Kafka brokers - version 0.11
Kafka topics - 15 partitions, replication factor 3
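The Streams application's own configuration is not shown in the question; a minimal sketch of what such a configuration typically looks like is below. The property keys are standard Kafka Streams config names, but the application id, broker list, thread count, and state directory are illustrative placeholders, not values from the question.

```java
import java.util.Properties;

public class StreamsConfigSketch {
    public static Properties buildConfig() {
        Properties props = new Properties();
        // Placeholder values; the real application id and broker list
        // are not given in the question.
        props.put("application.id", "my-join-app");
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        // With more than one stream thread, several threads in the same
        // JVM can compete for a task's state-directory lock during a
        // rebalance -- the situation behind the LockException warning.
        props.put("num.stream.threads", "4");
        // Local state directory; each instance on the same host must
        // point at its own directory.
        props.put("state.dir", "/var/lib/kafka-streams");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildConfig().getProperty("application.id"));
    }
}
```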

Millions of records are consumed/produced every hour. Whenever I take down any one of the Kafka brokers, the application throws this exception:

org.apache.kafka.streams.errors.LockException: task [4_10] Failed to lock the state directory for task 4_10
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:99)
    at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:80)
    at org.apache.kafka.streams.processor.internals.StandbyTask.<init>(StandbyTask.java:62)
    at org.apache.kafka.streams.processor.internals.StreamThread.createStandbyTask(StreamThread.java:1325)
    at org.apache.kafka.streams.processor.internals.StreamThread.access$2400(StreamThread.java:73)
    at org.apache.kafka.streams.processor.internals.StreamThread$StandbyTaskCreator.createTask(StreamThread.java:313)
    at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:254)
    at org.apache.kafka.streams.processor.internals.StreamThread.addStandbyTasks(StreamThread.java:1366)
    at org.apache.kafka.streams.processor.internals.StreamThread.access$1200(StreamThread.java:73)
    at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:185)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:265)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:363)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:297)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:582)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:553)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)

I have read several JIRA issues where cleaning up the streams state helped resolve the problem. But is cleaning up the streams state every time we start the Kafka Streams application the correct solution, or just a patch? Also, does the stream cleanUp delay application startup?

Note: do we need to call streams.cleanUp() before calling streams.start() every time we start the Kafka Streams application?
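For reference, the call pattern being asked about is sketched below, assuming kafka-streams 0.11.x on the classpath (topology construction is elided, since it is not shown in the question; in 0.11 the topology was built with KStreamBuilder).

```java
// Sketch only: `builder` and `config` stand in for the application's
// topology builder and StreamsConfig, which the question does not show.
KafkaStreams streams = new KafkaStreams(builder, config);

// cleanUp() deletes the application's local state directory. It may
// only be called before start() or after close(). Calling it on every
// startup forces a full state restore from the changelog topics, so
// for large state stores it can noticeably delay startup.
streams.cleanUp();
streams.start();
```

Because of that restore cost, cleanUp() on every start is generally a workaround rather than a routine step.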

1 Answer:

Answer 0 (score: 1)

Seeing org.apache.kafka.streams.errors.LockException: task [4_10] Failed to lock the state directory for task 4_10 is actually expected and should resolve itself. The thread backs off in order to wait for another thread to release the lock, and retries later. Thus, if the retry happens before the second thread releases the lock, you may even see this WARN message multiple times in the log.

Eventually, however, the lock should be released by the second thread, and the first thread will be able to acquire it. Afterwards, Streams should just move forward. Note that it is a WARN message, not an error.