Apache Kafka error while stopping the container: OOME

Time: 2017-03-20 13:43:11

Tags: java apache-kafka

I'm getting an OOME when running Spring integration tests with Maven. The Surefire plugin is given plenty of memory, so that shouldn't be the problem, yet I still get an OOME like this:

[15:15:26][Step 1/3] 09:15:26.389 [main] DEBUG org.springframework.context.support.DefaultLifecycleProcessor - Asking bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry' of type [class org.springframework.kafka.config.KafkaListenerEndpointRegistry] to stop
[15:15:26][Step 1/3] 09:15:26.392 [main] ERROR org.springframework.kafka.listener.KafkaMessageListenerContainer - Error while stopping the container: 
[15:15:26][Step 1/3] java.lang.OutOfMemoryError: Java heap space
[15:15:26][Step 1/3]    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
[15:15:26][Step 1/3]    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
[15:15:26][Step 1/3]    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
[15:15:26][Step 1/3]    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
[15:15:26][Step 1/3]    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
[15:15:26][Step 1/3]    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
[15:15:26][Step 1/3]    at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
[15:15:26][Step 1/3]    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
[15:15:26][Step 1/3]    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
[15:15:26][Step 1/3]    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
[15:15:26][Step 1/3]    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
[15:15:26][Step 1/3]    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
[15:15:26][Step 1/3]    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:184)
[15:15:26][Step 1/3]    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:886)
[15:15:26][Step 1/3]    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
[15:15:26][Step 1/3]    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:412)
[15:15:26][Step 1/3]    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[15:15:26][Step 1/3]    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[15:15:26][Step 1/3]    at java.lang.Thread.run(Thread.java:745)

I have tried @DirtiesContext with no luck. How can I fix this, and what could be the root cause?
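
For reference, "giving the Surefire plugin plenty of memory" normally means raising the heap of the forked test JVM via argLine; a minimal sketch (the values are illustrative, not taken from the original build):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- illustrative heap setting; the original build's value is not shown -->
        <argLine>-Xmx1024m</argLine>
    </configuration>
</plugin>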

1 Answer:

Answer 0 (score: 0)

The exception went away after switching the tests to an embedded Kafka:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka-test</artifactId>
    <scope>test</scope>
</dependency>
@Component
public class EmbeddedKafkaComponent extends KafkaEmbedded {

    @Autowired
    public EmbeddedKafkaComponent(@Value("${spring.kafka.topic}") String topic) {
        // single embedded broker with controlled shutdown and the test topic pre-created
        super(1, true, topic);
    }

    @PostConstruct
    public void postConstruct() throws Exception {
        // start the embedded broker when the Spring context is built
        before();
    }

    @PreDestroy
    public void preDestroy() {
        // shut the embedded broker down when the context closes
        after();
    }
}
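KafkaEmbedded is normally used as a JUnit @ClassRule; wrapping it in a @Component and calling before()/after() from @PostConstruct/@PreDestroy instead ties the embedded broker's lifecycle to the Spring application context, so the broker is started when the context is built and stopped when it closes.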
@Configuration
@Profile("test")
public class ConsumerAppTestConfig {

    @Autowired
    private EmbeddedKafkaComponent embeddedKafka;

    // The original snippet does not show where groupId is defined; injecting it
    // from a property is one plausible source (this property name is assumed).
    @Value("${spring.kafka.group-id}")
    private String groupId;

    @Bean
    public Map<String, Object> consumerConfig() {
        // consumer properties wired to the embedded broker
        return KafkaTestUtils.consumerProps(groupId, "true", embeddedKafka);
    }
}
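
A minimal sketch of how a test could activate this configuration, assuming a Spring Boot setup (the original post does not show the test class, so every name below is illustrative):

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.junit4.SpringRunner;

// Illustrative test class, not taken from the original post.
@RunWith(SpringRunner.class)
@SpringBootTest
@ActiveProfiles("test")
public class ConsumerAppIntegrationTest {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Test
    public void listenerContainersRunAgainstEmbeddedBroker() {
        // all @KafkaListener containers should be running against the embedded broker
        assertTrue(registry.getListenerContainers().stream()
                .allMatch(MessageListenerContainer::isRunning));
    }
}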