Kafka producer cannot send data to the server

Asked: 2016-12-28 02:21:35

Tags: java apache-kafka kafka-producer-api

Here is my code. I am able to create the topic, but for some reason I cannot send data into it. After a long wait I get these errors. I am using Kafka version 2.11-0.8.2.1.

org.apache.kafka.clients.producer.KafkaProducer$FutureFailure@5474c6c
org.apache.kafka.clients.producer.KafkaProducer$FutureFailure@4b6995df
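(A note on this output: these two lines are not stack traces but the `toString()` of the `Future` returned by `producer.send()`; the actual exception only surfaces when the future is resolved with `get()`. A minimal pure-Java sketch, using `CompletableFuture` as a stand-in for the producer's future, with a made-up failure message:)

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class FutureToStringDemo {
    public static void main(String[] args) throws InterruptedException {
        // A future that has already failed, standing in for a failed send().
        CompletableFuture<String> failed = new CompletableFuture<>();
        failed.completeExceptionally(new RuntimeException("broker unreachable"));

        // Printing the future shows only its identity and state, not the cause.
        System.out.println(failed);

        // Resolving the future with get() surfaces the real exception.
        try {
            failed.get();
        } catch (ExecutionException e) {
            System.out.println("Actual cause: " + e.getCause().getMessage());
        }
    }
}
```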

Here is Kafka's server.log:

[2016-12-27 21:05:54,873] ERROR Closing socket for /127.0.0.1 because of error (kafka.network.Processor)
java.io.IOException: An established connection was aborted by the software in your host machine
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(Unknown Source)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
at sun.nio.ch.IOUtil.read(Unknown Source)
at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at kafka.utils.Utils$.read(Utils.scala:380)
at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
at kafka.network.Processor.read(SocketServer.scala:444)
at kafka.network.Processor.run(SocketServer.scala:340)
at java.lang.Thread.run(Unknown Source)
[2016-12-27 21:07:54,727] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor)
[2016-12-27 21:16:08,559] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor)

Here is my Java code for sending integers to Kafka:

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("acks", "all");
    props.put("retries", 0);
    props.put("batch.size", 16384);
    props.put("linger.ms", 1);
    props.put("buffer.memory", 33554432);
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("timeout.ms", "50");

    Producer<String, String> producer = new KafkaProducer<>(props);
    for (int i = 0; i < 2; i++) {
        System.out.println(producer.send(new ProducerRecord<String, String>("testtopic",
                Integer.toString(i), Integer.toString(i))).toString());
    }
    producer.close();

Here is the pom.xml:

<dependencies>
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.8.2.1</version>
  </dependency>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.6.4</version>
  </dependency>
  <dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.16</version>
    <exclusions>
      <exclusion>
        <groupId>javax.jms</groupId>
        <artifactId>jms</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>

2 answers:

Answer 0 (score: 0)

Nothing stands out except:

props.put("timeout.ms", "50");

The request timeout should be larger than the default polling interval, which in Kafka is 5 minutes by default. So I think if you leave it at the default (just over 5 minutes) it should work.
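(Following that advice, a sketch of the question's producer setup with the 50 ms timeout simply dropped, so the client falls back to its built-in defaults; broker address and serializers taken from the question:)

```java
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// No explicit timeout.ms: rely on the client's default timeouts
// instead of the 50 ms value used in the question.
Producer<String, String> producer = new KafkaProducer<>(props);
```

This is a config fragment, not a runnable program: it needs a reachable broker and, per the answers, matching client and broker versions to actually succeed.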

Answer 1 (score: 0)

I downgraded my Kafka version to kafka_2.10-0.9.0.0, and it works with the following properties.

    Properties props = new Properties();
    props.put("metadata.broker.list", "localhost:9092");
    props.put("acks", "all");
    props.put("retries", 0);
    props.put("batch.size", 16384);
    props.put("linger.ms", 1);
    props.put("buffer.memory", 33554432);
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    ProducerConfig producerConfig = new ProducerConfig(props);
    kafka.javaapi.producer.Producer<String, String> producer =
            new kafka.javaapi.producer.Producer<String, String>(producerConfig);
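(The snippet above only constructs the old-API producer and never sends. Under that API a send might look like the following sketch; the topic name `testtopic` is taken from the question, and since it requires a running broker it is untested here:)

```java
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

// producerConfig built as in the answer above
Producer<String, String> producer = new Producer<String, String>(producerConfig);
// The old producer API sends KeyedMessage instead of ProducerRecord.
producer.send(new KeyedMessage<String, String>("testtopic", "0", "0"));
producer.close();
```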

My pom.xml file is as follows:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>TwitterKafkaPostgre</groupId>
<artifactId>TwitterKafkaPostgre</artifactId>
<version>0.0.1-SNAPSHOT</version>
 <dependencies>
  <dependency>
    <groupId>com.twitter</groupId>
    <artifactId>hbc-core</artifactId> <!-- or hbc-twitter4j -->
    <version>2.2.0</version> <!-- or whatever the latest version is -->
  </dependency>
  <dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-clients</artifactId>
   <version>0.9.0.0</version>
 </dependency>  
 <dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka_2.11</artifactId>
   <version>0.9.0.0</version>
 </dependency>
 <dependency>
   <groupId>log4j</groupId>
   <artifactId>log4j</artifactId>
   <version>1.2.16</version>
   <exclusions>
     <exclusion>
       <groupId>javax.jms</groupId>
       <artifactId>jms</artifactId>
     </exclusion>
   </exclusions>
 </dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-simple</artifactId>
  <version>1.6.4</version>
</dependency>
 <dependency>
   <groupId>com.google.guava</groupId>
   <artifactId>guava</artifactId>
   <version>18.0</version>
 </dependency>
 </dependencies>
</project>