Docker Kafka container consumer does not consume data

Date: 2018-02-25 10:02:07

Tags: java docker apache-kafka

I am new to Docker and Apache Kafka. What I want to do is write a consumer and a producer class in Java. I set up spotify/kafka, a Kafka container for Docker, but something went wrong.

I couldn't find any producer/consumer example for a Dockerized Kafka container (if you have one, please share it), so I just tried to use it like plain Kafka (i.e. not in a Docker container; I guess there is no difference in usage). I tried the code here (I also tried to contact the author to ask, but couldn't reach them, so I'm asking for help here): but when I type something into the producer terminal, nothing shows up in the consumer terminal. My OS is Ubuntu Xenial 16.04. Here is what I did:

I started the Docker Kafka container by typing:

docker run -it spotify/kafka
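
For comparison, the spotify/kafka README shows a run command that publishes both the ZooKeeper and broker ports to the host and sets the advertised listener address (I'm not sure whether omitting these matters here, but without -p mappings a client on the host generally cannot reach the broker at localhost:9092):

docker run -p 2181:2181 -p 9092:9092 \
    --env ADVERTISED_HOST=localhost --env ADVERTISED_PORT=9092 \
    spotify/kafka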

At the end of the output I got this message, so I assume the broker came up fine:

2018-02-25 09:27:16,911 INFO success: kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
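
To rule out a connectivity problem, a quick host-side check of the published ports (assuming the stock docker CLI and netcat are available) would be:

docker ps --format "{{.Names}} -> {{.Ports}}"   # expect something like 0.0.0.0:9092->9092/tcp
nc -zv localhost 9092                           # should report the broker port as open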

Consumer class:

import java.util.Arrays;
import java.util.Properties;
import java.util.Scanner;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class Consumer {
    private static Scanner in;

    public static void main(String[] argv) throws Exception {
        if (argv.length != 2) {
            System.err.printf("Usage: %s <topicName> <groupId>\n",
                    Consumer.class.getSimpleName());
            System.exit(-1);
        }
        in = new Scanner(System.in);
        String topicName = argv[0];
        String groupId = argv[1];

        ConsumerThread consumerRunnable = new ConsumerThread(topicName, groupId);
        consumerRunnable.start();

        // Block until the user types "exit", then shut the consumer down cleanly
        String line = "";
        while (!line.equals("exit")) {
            line = in.next();
        }
        consumerRunnable.getKafkaConsumer().wakeup();
        System.out.println("Stopping consumer .....");
        consumerRunnable.join();
    }

    private static class ConsumerThread extends Thread {
        private String topicName;
        private String groupId;
        private KafkaConsumer<String, String> kafkaConsumer;

        public ConsumerThread(String topicName, String groupId) {
            this.topicName = topicName;
            this.groupId = groupId;
        }

        public void run() {
            Properties configProperties = new Properties();
            configProperties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            configProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            configProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
            configProperties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
            configProperties.put(ConsumerConfig.CLIENT_ID_CONFIG, "simple");

            // Figure out where to start processing messages from
            kafkaConsumer = new KafkaConsumer<String, String>(configProperties);
            kafkaConsumer.subscribe(Arrays.asList(topicName));
            // Start processing messages
            try {
                while (true) {
                    ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
                    System.out.println(records.toString() + "geldi");
                    for (ConsumerRecord<String, String> record : records)
                        System.out.println(record.value());
                }
            } catch (WakeupException ex) {
                // Thrown when wakeup() is called from main; used here as the shutdown signal
                System.out.println("Exception caught " + ex.getMessage());
            } finally {
                kafkaConsumer.close();
                System.out.println("After closing KafkaConsumer");
            }
        }

        public KafkaConsumer<String, String> getKafkaConsumer() {
            return this.kafkaConsumer;
        }
    }
}
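
One detail worth double-checking in the consumer above: the key deserializer is ByteArrayDeserializer, while the consumer is typed KafkaConsumer<String, String>. It shouldn't break this particular test (the producer below sends records without keys), but a consistent configuration would be:

configProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer"); // matches the <String, String> type parameters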

Producer class:

import java.util.Properties;
import java.util.Scanner;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class Producer {
    private static Scanner in;

    public static void main(String[] argv) throws Exception {
        if (argv.length != 1) {
            System.err.println("Please specify 1 parameter");
            System.exit(-1);
        }
        String topicName = argv[0];
        in = new Scanner(System.in);
        System.out.println("Enter message(type exit to quit)");

        // Configure the Producer
        Properties configProperties = new Properties();
        configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
        configProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        // Fully qualified name to avoid clashing with this class, which is also named Producer
        org.apache.kafka.clients.producer.Producer<String, String> producer =
                new KafkaProducer<String, String>(configProperties);
        String line = in.nextLine();
        while (!line.equals("exit")) {
            // TODO: Make sure to use the ProducerRecord constructor that does not take partition Id
            ProducerRecord<String, String> rec = new ProducerRecord<String, String>(topicName, line);
            producer.send(rec);
            line = in.nextLine();
        }
        in.close();
        producer.close();   // close() flushes any buffered records before returning
    }
}
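
Since send() is asynchronous, errors in the producer (e.g. an unreachable broker) fail silently with the fire-and-forget call above. While debugging, a callback makes such errors visible; a minimal sketch against the 0.9 producer API:

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

// Replace producer.send(rec) in the loop above with:
producer.send(rec, new Callback() {
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            exception.printStackTrace();   // surfaces e.g. TimeoutException when the broker is unreachable
        } else {
            System.out.println("Sent to partition " + metadata.partition());
        }
    }
});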

I built the fat jar and ran the two classes in separate terminals:

mvn clean compile assembly:single
java -cp (fat jar path) .../Consumer test(topic name) group1
java -cp (fat jar path) .../Producer test(topic name)

When I type something into the producer terminal, nothing appears in the consumer. Note that I did not install ZooKeeper separately, because spotify/kafka includes ZooKeeper. I did not create any topic or group before these steps; this is all I did, and I can't find out how to proceed. How can I fix this?
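
(By default the broker auto-creates topics on first use (auto.create.topics.enable=true), so the missing topic is probably not the issue by itself. The console tools bundled in the container can confirm whether the broker works end to end; the Kafka install path inside spotify/kafka is my guess and depends on the bundled version, so check it first:)

docker exec -it <container-id> bash
ls /opt                                   # locate the kafka_<scala>-<version> directory
cd /opt/kafka_*                           # path is an assumption; verify with ls above
bin/kafka-topics.sh --list --zookeeper localhost:2181
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning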

EDIT: I have added the consumer and producer config values; can anyone spot anything wrong?

Consumer config:

metric.reporters = []
metadata.max.age.ms = 300000
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
group.id = gr1
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [localhost:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
enable.auto.commit = true
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
session.timeout.ms = 30000
metrics.num.samples = 2
client.id = simple
ssl.endpoint.identification.algorithm = null
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
ssl.protocol = TLS
check.crcs = true
request.timeout.ms = 40000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
fetch.min.bytes = 1024
send.buffer.bytes = 131072
auto.offset.reset = latest

2018-02-25 16:23:37 INFO  AppInfoParser:82 - Kafka version : 0.9.0.0
2018-02-25 16:23:37 INFO  AppInfoParser:83 - Kafka commitId : fc7243c2af4b2b4a

Producer config:

compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id = 
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = 1
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 0

2018-02-25 16:24:16 INFO  AppInfoParser:82 - Kafka version : 0.9.0.0
2018-02-25 16:24:16 INFO  AppInfoParser:83 - Kafka commitId : fc7243c2af4b2b4a

0 Answers