Kafka produce/consume fails from a Windows client to a remote Linux server

Date: 2016-11-08 15:00:26

Tags: windows apache-kafka producer-consumer kafka-producer-api

I downloaded kafka_2.10-0.10.0.1 to both my Windows machine and my Linux machines (I have a cluster of three Linux machines: 192.168.80.128/129/130). So I use the Windows machine as the Kafka client and the Linux machines as the Kafka servers. I tried to produce messages from Windows to the remote Kafka brokers; the command and its output are as follows:

F:\kafka_2.10-0.10.0.1\kafka_2.10-0.10.0.1\bin\windows>kafka-console-producer.bat --broker-list 192.168.80.128:9092 --topic wuchang
DADFASDF
ASDFASF
[2016-11-08 22:41:30,311] ERROR Error when sending message to topic wuchang with key: null, value: 8 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 2 record(s) expired due to timeout while requesting metadata from brokers for wuchang-0
[2016-11-08 22:41:30,313] ERROR Error when sending message to topic wuchang with key: null, value: 7 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 2 record(s) expired due to timeout while requesting metadata from brokers for wuchang-0
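A common cause of this metadata timeout (an assumption on my part, not confirmed in the question) is that the broker advertises a hostname the remote Windows client cannot resolve; the consumer WARN below shows the broker registering itself as `BrokerEndPoint(1,vm02,9092)`, and `vm02` is likely unknown to the Windows machine. A hypothetical fix sketch for `server.properties` on broker 192.168.80.128 (broker id 1) would advertise a routable address instead:

```properties
# Sketch, assuming Kafka 0.10.x server.properties on 192.168.80.128:
# bind on all interfaces, but advertise the IP that remote clients can reach
# instead of the bare hostname vm02.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.80.128:9092
```

Alternatively, mapping `vm02` to 192.168.80.128 in the Windows hosts file would let the client resolve the advertised name as-is.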

I am quite sure my Kafka cluster itself is healthy, because the produce and consume commands work when I run them directly on the Linux servers.

Of course, consuming messages from the remote Kafka brokers fails as well:

F:\kafka_2.10-0.10.0.1\kafka_2.10-0.10.0.1\bin\windows>kafka-console-consumer.bat --bootstrap-server 192.168.80.128:9092 --topic wuchang --from-beginning --zookeeper 192.168.80.128:2181
[2016-11-08 22:56:43,486] WARN Fetching topic metadata with correlation id 0 for topics [Set(wuchang)] from broker [BrokerEndPoint(1,vm02,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
        at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:80)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
        at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
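Two things stand out in this invocation (my reading, not stated in the question). First, the command mixes `--bootstrap-server` (new consumer) with `--zookeeper` (old consumer); in 0.10.x the presence of `--zookeeper` selects the old consumer, which is why the stack trace goes through `kafka.consumer.ConsumerFetcherManager`. Second, the old consumer gets the broker list from ZooKeeper, which returns the registered hostname `vm02`; if Windows cannot resolve `vm02`, the connection attempt fails with the `ClosedChannelException` shown. A sketch of the new-consumer form, using only the reachable IP:

```bat
REM Sketch: pick one consumer implementation; the new consumer needs
REM only --bootstrap-server and never consults ZooKeeper directly.
kafka-console-consumer.bat --bootstrap-server 192.168.80.128:9092 --topic wuchang --from-beginning
```

Note the new consumer still receives the broker's advertised hostname in the metadata response, so the `vm02` resolution problem must be fixed either way (hosts file entry or `advertised.listeners` on the broker).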

In addition, I tried the Kafka Java API example on my Windows machine, and it also fails, without printing any error message. My Java code is:

package com.netease.ecom.data.connect.hdfs;


import com.twitter.bijection.Injection;
import com.twitter.bijection.avro.GenericAvroCodecs;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class SimpleAvroProducer {

    public static final String USER_SCHEMA = "{"
            + "\"type\":\"record\","
            + "\"name\":\"myrecord\","
            + "\"fields\":["
            + "  { \"name\":\"str1\", \"type\":\"string\" },"
            + "  { \"name\":\"str2\", \"type\":\"string\" },"
            + "  { \"name\":\"int1\", \"type\":\"int\" }"
            + "]}";

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.80.128:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        Schema.Parser parser = new Schema.Parser();
        Schema schema = parser.parse(USER_SCHEMA);
        Injection<GenericRecord, byte[]> recordInjection = GenericAvroCodecs.toBinary(schema);

        KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props);

        for (int i = 0; i < 1000; i++) {
            GenericData.Record avroRecord = new GenericData.Record(schema);
            avroRecord.put("str1", "Str 1-" + i);
            avroRecord.put("str2", "Str 2-" + i);
            avroRecord.put("int1", i);

            byte[] bytes = recordInjection.apply(avroRecord);

            ProducerRecord<String, byte[]> record = new ProducerRecord<>("mytopic", bytes);
            // Pass a callback so asynchronous send failures are surfaced
            // instead of being silently dropped.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                }
            });

            Thread.sleep(250);

        }

        producer.close();
    }
}
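Since the broker registers itself as `vm02` (taken from the WARN log above), a quick way to check whether the Windows client can even resolve that name is a plain JDK lookup. This is a diagnostic sketch only; the class name and the hosts-file suggestion are mine, not from the question:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class BrokerResolveCheck {
    public static void main(String[] args) {
        // The hostname the broker advertised in its metadata (from the WARN log).
        String host = args.length > 0 ? args[0] : "vm02";
        try {
            InetAddress addr = InetAddress.getByName(host);
            System.out.println(host + " resolves to " + addr.getHostAddress());
        } catch (UnknownHostException e) {
            // This is the same failure mode the clients hit: they receive
            // "vm02" in the metadata but cannot turn it into an address.
            System.out.println("cannot resolve " + host
                    + "; map it in C:\\Windows\\System32\\drivers\\etc\\hosts"
                    + " or set advertised.listeners on the broker");
        }
    }
}
```

If the lookup fails, adding a line like `192.168.80.128 vm02` to the Windows hosts file, or advertising the IP from the broker, should let both the console tools and this producer connect.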

Yes, my code is meant to send Avro data to Kafka, and it too fails without reporting any error.

The server.properties of one of the Kafka brokers on my Linux machines is:

0 Answers:

No answers yet