Kafka embedded: java.io.FileNotFoundException: /tmp/kafka-7785736914220873149/replication-offset-checkpoint.tmp

Date: 2018-02-27 10:30:51

Tags: spring-boot cassandra spring-kafka

I am using KafkaEmbedded in my integration tests, and I get a FileNotFoundException:

java.io.FileNotFoundException: /tmp/kafka-7785736914220873149/replication-offset-checkpoint.tmp 
at java.io.FileOutputStream.open0(Native Method) ~[na:1.8.0_141]
at java.io.FileOutputStream.open(FileOutputStream.java:270) ~[na:1.8.0_141]
at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_141]
at java.io.FileOutputStream.<init>(FileOutputStream.java:162) ~[na:1.8.0_141]
at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:43) ~[kafka_2.11-0.11.0.0.jar:na]
at kafka.server.checkpoints.OffsetCheckpointFile.write(OffsetCheckpointFile.scala:58) ~[kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:1118) [kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:1115) [kafka_2.11-0.11.0.0.jar:na]
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733) [scala-library-2.11.11.jar:na]
at scala.collection.immutable.Map$Map1.foreach(Map.scala:116) [scala-library-2.11.11.jar:na]
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732) [scala-library-2.11.11.jar:na]
at kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:1115) [kafka_2.11-0.11.0.0.jar:na]
at kafka.server.ReplicaManager$$anonfun$1.apply$mcV$sp(ReplicaManager.scala:211) [kafka_2.11-0.11.0.0.jar:na]
at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110) [kafka_2.11-0.11.0.0.jar:na]
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57) [kafka_2.11-0.11.0.0.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_141]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_141]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]

My tests pass successfully, but I get this error at the end of the build.

After hours of research, I found the following:

  • Kafka's TestUtils.tempDirectory method is used to create the temporary directory for the embedded Kafka broker. It also registers a shutdown hook that deletes this directory when the JVM exits.
  • When the unit tests finish executing, System.exit is called, which runs all registered shutdown hooks.

If a Kafka broker is still running when the unit tests end, it will try to read/write data in a directory that has already been deleted, producing these FileNotFoundExceptions.
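To illustrate the race (a standalone sketch, not code from this project; the directory and file names are hypothetical), two independently registered JVM shutdown hooks run concurrently with no ordering guarantee, so a hook that deletes the directory can win against one that is still writing into it:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class ShutdownHookRace {

    public static void main(String[] args) {
        File dir = new File(System.getProperty("java.io.tmpdir"), "kafka-example");
        dir.mkdirs();

        // Analogous to the hook registered by TestUtils.tempDirectory():
        // delete the broker's log directory on JVM exit.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            File[] files = dir.listFiles();
            if (files != null) {
                for (File f : files) {
                    f.delete();
                }
            }
            dir.delete();
        }));

        // Analogous to the broker's checkpoint scheduler still flushing at exit:
        // write a checkpoint file into the same directory.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try (FileOutputStream out = new FileOutputStream(
                    new File(dir, "replication-offset-checkpoint.tmp"))) {
                out.write(0);
            } catch (IOException e) {
                // FileNotFoundException when the delete hook wins the race.
                e.printStackTrace();
            }
        }));
    }
}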

My configuration class:

@Configuration
public class KafkaEmbeddedConfiguration {

    private final KafkaEmbedded kafkaEmbedded;

    public KafkaEmbeddedConfiguration() throws Exception {
        // Start the embedded broker when the configuration class is created.
        kafkaEmbedded = new KafkaEmbedded(1, true, "topic1");
        kafkaEmbedded.before();
    }

    @Bean
    public KafkaTemplate<String, Message> sender(ProtobufSerializer protobufSerializer,
            KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry) throws Exception {
        KafkaTemplate<String, Message> sender = KafkaTestUtils.newTemplate(kafkaEmbedded,
                new StringSerializer(), protobufSerializer);
        // Wait until every listener container has been assigned its partitions.
        for (MessageListenerContainer listenerContainer :
                kafkaListenerEndpointRegistry.getListenerContainers()) {
            ContainerTestUtils.waitForAssignment(listenerContainer,
                    kafkaEmbedded.getPartitionsPerTopic());
        }
        return sender;
    }
}

My test class:

@RunWith(SpringRunner.class)
public class DeviceEnergyKafkaListenerIT {
    ...

    @Autowired
    private KafkaTemplate<String, Message> sender;

    @Test
    public void test() {
        ...
        sender.send(topic, msg);
        sender.flush();
    }
}

Any ideas how to solve this?

3 Answers:

Answer 0 (score: 4)

With a @ClassRule broker, add an @AfterClass method...

@AfterClass
public static void tearDown() {
    embeddedKafka.getKafkaServers().forEach(b -> b.shutdown());
    embeddedKafka.getKafkaServers().forEach(b -> b.awaitShutdown());
}

For a @Rule or a bean, use an @After method instead.
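For the bean case, one hedged sketch (my assumption, not from the answer: it relies on before()/after() being public on KafkaEmbedded, which the question's and Answer 1's code suggest) is to let Spring call after() when the context closes:

@Bean(destroyMethod = "after")
public KafkaEmbedded kafkaEmbedded() throws Exception {
    // Start the broker when the bean is created; Spring invokes after()
    // (the declared destroy method) when the context closes, so the broker
    // is shut down in an orderly way instead of at JVM exit.
    KafkaEmbedded kafkaEmbedded = new KafkaEmbedded(1, true, "topic1");
    kafkaEmbedded.before();
    return kafkaEmbedded;
}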

Answer 1 (score: 1)

Shut down the ReplicaManager (and clear its field via reflection) before stopping the broker:

final KafkaServer server =
        embeddedKafka.getKafkaServers().stream().findFirst().orElse(null);
if (server != null) {
    server.replicaManager().shutdown(false);
    final Field replicaManagerField = server.getClass().getDeclaredField("replicaManager");
    if (replicaManagerField != null) {
        replicaManagerField.setAccessible(true);
        replicaManagerField.set(server, null);
    }
}
embeddedKafka.after();

For a more detailed discussion, you can refer to this thread: Embedded kafka issue with multiple tests using the same context

Answer 2 (score: 0)

The following solution provided by mhyeon-lee worked for me:

import org.apache.kafka.common.utils.Exit;

class SomeTest {
    static {
        Exit.setHaltProcedure((statusCode, message) -> {
            if (statusCode != 1) {
                Runtime.getRuntime().halt(statusCode);
            }
        });
    }

    @Test
    void test1() {
    }

    @Test
    void test2() {
    }
}

When the JVM shutdown hooks run, the Kafka log files are deleted, and Exit.halt(1) is called when another shutdown hook accesses the Kafka log files at the same time.

Since halt is called here with status 1, I only defend against status 1:
https://github.com/a0x8o/kafka/blob/master/core/src/main/scala/kafka/log/LogManager.scala#L193

If you run into a case where tests fail with a different status value, you can add defensive code for that value as well, as in the sketch below.
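For example (a sketch extending the halt procedure above; the extra status code 2 is purely hypothetical, not a value taken from Kafka):

Exit.setHaltProcedure((statusCode, message) -> {
    // Swallow the status codes you have observed during shutdown in your
    // own build; anything else still halts the JVM as usual.
    if (statusCode != 1 && statusCode != 2) {
        Runtime.getRuntime().halt(statusCode);
    }
});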

An error log may still appear, but the tests will not fail, because the call is not propagated to Runtime.halt.

References:

https://github.com/spring-projects/spring-kafka/issues/194#issuecomment-612875646
https://github.com/spring-projects/spring-kafka/issues/194#issuecomment-613548108