Kafka consumer uses a large amount of memory and eventually fails with OutOfMemoryError

Asked: 2018-04-13 01:04:32

Tags: kafka-consumer-api

The Kafka consumer (kafka-clients-0.8.2.1) uses a large amount of memory and eventually fails with an OutOfMemoryError:

ERROR [2018-04-09 21:31:45,946] kafka.network.BoundedByteBufferReceive: OOME with size 10485962
! java.lang.OutOfMemoryError: Java heap space
! at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
! at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
! at kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
! at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
! at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
! at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
! at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
! at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
! at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
! at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
! at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
! at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
! at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
! at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
! at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
INFO  [2018-04-09 21:31:45,946] kafka.consumer.SimpleConsumer: Reconnect due to socket error: java.lang.OutOfMemoryError: Java heap space
ERROR [2018-04-09 21:31:46,118] kafka.network.BoundedByteBufferReceive: OOME with size 10485962
! java.lang.OutOfMemoryError: Java heap space
! at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
! at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
! at kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
! at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
! at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
! at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
! at kafka.network.BlockingChannel.receive(BlockingChannel.scala:111)
! at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:71)
! at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:112)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:112)
! at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:111)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
! at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:111)
! at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
! at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:110)
! at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:94)
! at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:86)
! at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)

My Kafka consumer configuration. Setting fetch.message.max.bytes to 32 KB (down from the 1 MB default) delays the error but does not prevent it:

zookeeper.connect: "{{ zookeeper }}"
group.id: my-service
auto.offset.reset: smallest
rebalance.backoff.ms: 2500
rebalance.max.retries: 8
fetch.message.max.bytes: 32768
zookeeper.session.timeout.ms: 22000 
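
For scale, the worst-case fetch-buffer footprint of the 0.8 high-level consumer is often estimated as streams × queued chunks × fetch size (queued.max.message.chunks is a real 0.8-era config knob, but its default varies by version, so treat this as a rough sketch rather than the library's exact accounting):

```java
// Rough worst-case estimate of fetch-buffer memory for the old high-level
// consumer. Assumption (mine, not from the question): each stream can hold up
// to queued.max.message.chunks fetch chunks of fetch.message.max.bytes each.
public class ConsumerMemoryEstimate {
    static long worstCaseBytes(int numStreams, int queuedMaxChunks, long fetchMessageMaxBytes) {
        return (long) numStreams * queuedMaxChunks * fetchMessageMaxBytes;
    }

    public static void main(String[] args) {
        // 8 streams, 10 queued chunks, 1 MB default fetch size:
        System.out.println(worstCaseBytes(8, 10, 1_048_576)); // 83886080 bytes, roughly 80 MiB
    }
}
```

Under this model, shrinking fetch.message.max.bytes only scales the buffer down linearly, which would explain why 32 KB delays an OOM on a small heap without eliminating it.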

My consumer code:

public void start(int numThreads, Function<Message, ?> function) {
    Collection<Stream<Message>> streams = streams(numThreads);
    executor = Executors.newFixedThreadPool(streams.size());
    streams.stream().forEach(s -> executor.submit(() -> s.forEach(m -> {
        try {
            function.apply(m);
        } catch (Throwable t) {
            LOGGER.debug("Failed to apply the message."
                + " topic: " + m.getTopic()
                + " partition: " + m.getPartition()
                + " offset: " + m.getOffset()
                + " payload: " + m.getPayloadAsString(), t);
        }
    })));
}
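
One side observation (mine, not from the post): `catch (Throwable t)` in the worker also swallows `OutOfMemoryError`, so the executor threads keep looping after the heap is exhausted. A minimal sketch of logging ordinary exceptions while letting JVM Errors propagate (the names here are placeholders, not the post's classes):

```java
import java.util.function.Function;

// Sketch: log and skip ordinary exceptions, but rethrow JVM Errors such as
// OutOfMemoryError so the worker thread dies instead of silently continuing.
// The message type and logging are simplified stand-ins for the question's code.
public class SafeApply {
    static <T, R> void applyLogging(Function<T, R> function, T message) {
        try {
            function.apply(message);
        } catch (Error e) {
            throw e; // never swallow OutOfMemoryError, StackOverflowError, ...
        } catch (Exception e) {
            System.err.println("Failed to apply the message: " + e);
        }
    }

    public static void main(String[] args) {
        applyLogging(s -> { throw new RuntimeException("boom"); }, "payload");
        System.out.println("worker continues after Exception");
    }
}
```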

The streams method:

protected Collection<Stream<Message>> streams(int numStreams) {
    if (consumer != null) { 
        throw new IllegalStateException("Consumer still running"); 
    }
    consumer = kafka.consumer.Consumer.createJavaConsumerConnector(consumerConfig);
    Map<String, Integer> topicCountMap = ImmutableMap.of(topic, numStreams); // autoboxing instead of deprecated new Integer(...)
    Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
    List<KafkaStream<byte[], byte[]>> kafkaStreams = consumerMap.get(topic);
    return kafkaStreams.stream()
        .map(ks -> StreamSupport.stream(new MessageSpliterator(topic, ks), false)) // turn kafka stream to java stream
        .collect(Collectors.toList());
}

My MessageSpliterator class, which extends AbstractSpliterator:

static class MessageSpliterator extends AbstractSpliterator<Message> {
    private final String topic;
    private final ConsumerIterator<byte[], byte[]> iterator;

    protected MessageSpliterator(String topic, KafkaStream<byte[], byte[]> kafkaStream) {
        super(Long.MAX_VALUE, IMMUTABLE | ORDERED | NONNULL);
        this.topic = topic;
        this.iterator = kafkaStream.iterator();
    }

    @Override
    public boolean tryAdvance(Consumer<? super Message> action) {
        if (!iterator.hasNext()) {
            return false;
        }
        MessageAndMetadata<byte[], byte[]> mnm = iterator.next();
        Message message = new Message(topic, mnm.partition(), mnm.offset(), mnm.message());
        action.accept(message);
        return true;
    }
}
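
The spliterator-over-blocking-iterator pattern above can be exercised without Kafka; this Kafka-free sketch uses the same AbstractSpliterator shape over a plain List iterator:

```java
import java.util.Iterator;
import java.util.List;
import java.util.Spliterators.AbstractSpliterator;
import java.util.function.Consumer;
import java.util.stream.StreamSupport;

// Minimal, Kafka-free sketch of the same pattern as MessageSpliterator above:
// adapt a (possibly blocking) Iterator into a java.util.stream.Stream.
public class SpliteratorDemo {
    static class IteratorSpliterator<T> extends AbstractSpliterator<T> {
        private final Iterator<T> iterator;

        IteratorSpliterator(Iterator<T> iterator) {
            super(Long.MAX_VALUE, IMMUTABLE | ORDERED | NONNULL);
            this.iterator = iterator;
        }

        @Override
        public boolean tryAdvance(Consumer<? super T> action) {
            if (!iterator.hasNext()) {
                return false;
            }
            action.accept(iterator.next());
            return true;
        }
    }

    public static void main(String[] args) {
        long count = StreamSupport.stream(
            new IteratorSpliterator<>(List.of("a", "b", "c").iterator()), false).count();
        System.out.println(count); // prints 3
    }
}
```

Note that such a stream is sequential and unbounded-by-default (Long.MAX_VALUE size estimate), so terminal operations only finish when the underlying iterator is exhausted.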

Any pointers on what I'm doing wrong here?

0 Answers:

There are no answers yet.