Unable to publish to Kafka

Date: 2018-12-18 01:23:42

Tags: java docker apache-kafka docker-compose

I have the following Kafka producer:

package dathanb;

import org.apache.kafka.clients.producer.*;

import java.util.Properties;

public class Producer {

    public static void main(String[] args){
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.put(ProducerConfig.ACKS_CONFIG, "all");
        properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.LINGER_MS_CONFIG, 1000);
        properties.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 5000);

        try (KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(properties)) {
            // The metadata fetch succeeds, so the bootstrap connection itself works
            System.out.println(kafkaProducer.partitionsFor("kafka-test"));
            for (int i = 0; i < 1000; i++) {
                System.out.println(i);
                var metadataFuture = kafkaProducer.send(new ProducerRecord<>("kafka-test", 0, null, "test message - " + i), callback());
                // Blocks waiting for the broker's acknowledgement; this never returns
                System.out.println(metadataFuture.get().partition());
                Thread.sleep(1000);
            }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static Callback callback() {
        return (metadata, exception) -> {
            System.out.println(metadata);
            System.out.println(exception);
        };
    }
}

The System.out.println(kafkaProducer.partitionsFor("kafka-test")); call prints what looks like a correct partition configuration: [Partition(topic = kafka-test, partition = 0, leader = 0, replicas = [0], isr = [0], offlineReplicas = [])]. But the metadataFuture.get().partition() line never prints anything, and the callback is never invoked; the application appears to hang indefinitely. I don't even get a timeout, which is what I would have expected.
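Incidentally, the indefinite hang can at least be made observable by bounding the wait on the returned future: the plain java.util.concurrent Future.get(timeout, unit) throws a TimeoutException instead of blocking forever. A minimal sketch reusing the same topic and serializer settings as above (the BoundedSendProbe class name is made up here for illustration):

import org.apache.kafka.clients.producer.*;

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedSendProbe {

    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(properties)) {
            var future = producer.send(new ProducerRecord<>("kafka-test", 0, null, "probe"));
            try {
                // get(timeout, unit) gives up instead of blocking indefinitely
                System.out.println("acked partition: " + future.get(10, TimeUnit.SECONDS).partition());
            } catch (TimeoutException e) {
                System.out.println("send was not acknowledged within 10 seconds");
            }
        }
    }
}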

I'm running Kafka 2.1.0 on Scala 2.12 (Dockerfile), and I'm using the kafka-clients 2.1.0 library. I'm running the Dockerized Kafka via Docker Compose. Here is my docker-compose.yml:

version: "2"
services:
  kafkaserver:
    image: "kafka"
    container_name: kafka
    hostname: kafkaserver
    networks:
      - kafkanet
    ports:
      - "2181:2181"
      - "9092:9092"
    environment:
      ADVERTISED_HOST: "kafkaserver"
      ADVERTISED_PORT: "9092"
networks:
  kafkanet:
    driver: bridge
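One way to separate basic broker reachability from the send path is a metadata round trip through the kafka-clients AdminClient, which goes through the same networking layer as the producer. A minimal sketch, assuming the same localhost:9092 bootstrap address as the producer config above (the ClusterCheck class name is made up here):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

public class ClusterCheck {

    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        properties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(properties)) {
            // nodes() returns a KafkaFuture; bound the wait so a dead broker fails fast
            System.out.println(admin.describeCluster().nodes().get(10, TimeUnit.SECONDS));
        }
    }
}

If the printed node list advertises a hostname such as kafkaserver that is not resolvable from the machine running the client, that could explain the hang, since Kafka clients follow up the bootstrap connection by connecting to the advertised host rather than the bootstrap address.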

The repository is hosted here.

What could cause this behavior?

Edit:

After adding log4j and the slf4j-log4j12 bridge, I now see the following error over and over in my logs:

    [2018-12-17 17:39:44,885] ERROR [Producer clientId=producer-1] Uncaught error in kafka producer I/O thread:  (org.apache.kafka.clients.producer.internals.Sender)
    java.lang.IllegalStateException: No entry found for connection 0
        at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
        at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
        at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:921)
        at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)
        at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:335)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:308)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:233)
        at java.base/java.lang.Thread.run(Thread.java:834)
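For reference, a log4j setup that surfaces these client logs looks something like the sketch below; this is a standard log4j 1.x console configuration placed on the classpath, not necessarily the exact file used here:

# log4j.properties: route kafka-clients logging to the console
log4j.rootLogger=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# Matches the "[timestamp] LEVEL message (logger)" format shown above
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n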

0 Answers
