Apache Kafka won't consume from the API

Posted: 2018-05-25 16:15:31

Tags: java scala apache-kafka kafka-producer-api

The console commands kafka-console-producer.sh and kafka-console-consumer.sh work fine, but when I try to produce or consume through the API, I can't. Can anyone tell me if there is something wrong with my Scala code?

import java.util.Properties

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// App makes the object body run as the program entry point.
object ScalaProducerExample extends App {
  val topic = "test"
  val brokers = "<broker>:9092"
  val props = new Properties()
  props.put("bootstrap.servers", brokers)
  props.put("client.id", "ScalaProducerExample")
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  val producer = new KafkaProducer[String, String](props)
  val data = new ProducerRecord[String, String](topic, "message")
  producer.send(data)
  producer.close()
}

These are the dependencies loaded in the build.sbt file:

libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.8.2.1"

libraryDependencies += "org.apache.kafka" %% "kafka" % "0.10.2.0"

I even wrote it in Java, and the same thing happens.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ProducerExample {
    public static void main(String[] args) {
        String topic = "test";
        String brokers = "<broker>:9092";
        System.out.println("init " );
        Properties props = new Properties();
        props.put("bootstrap.servers", brokers);
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);


        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        System.out.println("creating prducer " );
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        producer.flush();
        producer.send(new ProducerRecord<>(topic, "1", "2"));
        producer.close();
        System.out.println("close  " );
    }
}

The dependency in build.sbt is:

libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.8.2.1"

I know the connection works, because when I change the broker I get an error. But when the broker is correct, the program runs successfully, yet I never receive any messages.

Update: to check why the program appeared to run successfully, I gave the send a timeout. I ran this:

try {
    producer.send(new ProducerRecord<>(topic, "1", "2")).get(30, TimeUnit.SECONDS);
} catch (InterruptedException e) {
    e.printStackTrace();
} catch (ExecutionException e) {
    e.printStackTrace();
} catch (TimeoutException e) {
    e.printStackTrace();
}

and got this error:

java.util.concurrent.TimeoutException: Timeout after waiting for 30000 ms.
        at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
        at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
        at de.innocow.kafka.ProducerExample.main(ProducerExample.java:45)

How can I debug beyond this timeout and investigate why the producer is not sending?
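One way to dig further is to pass a Callback to send(), where the producer reports the underlying exception instead of a bare timeout. A minimal sketch, reusing the producer and topic from the code above:

producer.send(new ProducerRecord<>(topic, "1", "2"), (metadata, exception) -> {
    if (exception != null) {
        // The real cause of the failure (e.g. a metadata fetch problem) arrives here.
        exception.printStackTrace();
    } else {
        System.out.println("sent to partition " + metadata.partition()
                + " at offset " + metadata.offset());
    }
});
producer.flush(); // block until the buffered record is actually sent or fails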

1 Answer:

Answer 0 (score: 0):

producer.send(new ProducerRecord<>(topic, "1", "2"));
producer.flush();
producer.close();

Try this, and see the docs:

The flush() call gives a convenient way to ensure all previously sent messages have actually completed.
This example shows how to consume from one Kafka topic and produce to another Kafka topic:

for (ConsumerRecord<String, String> record : consumer.poll(100))
    producer.send(new ProducerRecord("my-topic", record.key(), record.value()));
producer.flush();
consumer.commit();
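For completeness, a minimal self-contained sketch that applies this ordering (send, then flush, then close) and surfaces any send failure; the class name FixedProducerExample is illustrative, while the broker placeholder and topic come from the question:

import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class FixedProducerExample {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "<broker>:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            // send() is asynchronous; get() forces any failure to surface here.
            RecordMetadata metadata = producer.send(new ProducerRecord<>("test", "1", "2")).get();
            System.out.println("sent to partition " + metadata.partition() + " at offset " + metadata.offset());
        } catch (ExecutionException e) {
            e.printStackTrace();
        } finally {
            producer.flush();  // push out anything still buffered
            producer.close();
        }
    }
}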