What could be the reason for the error "fetching topic metadata for topics [Set(topicname)] from broker [ArrayBuffer(id:0,ip,port:9092)] failed"?

Time: 2017-07-17 07:07:21

Tags: apache-kafka spark-streaming

I have a Spark Streaming job that reads data from one Kafka cluster (broker A, topic A). After processing, I send the output to another Kafka broker (broker B, topic B) and also publish the same messages to an MQTT broker. The job runs fine for about an hour, after which I get the error below.

17/07/12 17:47:22 ERROR Utils$: fetching topic metadata for topics [Set(topicname)] from broker [ArrayBuffer(id:0,host:xxx.xxx.xxx.xxx,port:9092)] failed
kafka.common.KafkaException: fetching topic metadata for topics [Set(topicname)] from broker [ArrayBuffer(id:0,host:xxx.xxx.xxx.xxx,port:9092)] failed
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
        at kafka.utils.Utils$.swallow(Utils.scala:172)
        at kafka.utils.Logging$class.swallowError(Logging.scala:106)
        at kafka.utils.Utils$.swallowError(Utils.scala:45)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
        at kafka.producer.Producer.send(Producer.scala:77)
        at kafka.javaapi.producer.Producer.send(Producer.scala:33)
        at com.test.spark.streaming.JobKafka$3$1.call(JobKafka.java:194)
        at com.test.spark.streaming.JobKafka$3$1.call(JobKafka.java:167)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartitionAsync$1.apply(JavaRDDLike.scala:741)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartitionAsync$1.apply(JavaRDDLike.scala:741)
        at org.apache.spark.SparkContext$$anonfun$34.apply(SparkContext.scala:2021)
        at org.apache.spark.SparkContext$$anonfun$34.apply(SparkContext.scala:2021)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
        at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
        ... 20 more
17/07/12 17:47:22 ERROR DefaultEventHandler: Failed to collate messages by topic, partition due to: fetching topic metadata for topics [Set(topicname)] from broker [ArrayBuffer(id:0,host:xxx.xxx.xxx.xxx,port:9092)] failed
17/07/12 17:47:22 ERROR Utils$: fetching topic metadata for topics [Set(topicname)] from broker [ArrayBuffer(id:0,host:xxx.xxx.xxx.xxx,port:9092)] failed
kafka.common.KafkaException: fetching topic metadata for topics [Set(topicname)] from broker [ArrayBuffer(id:0,host:xxx.xxx.xxx.xxx,port:9092)] failed
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:78)
        at kafka.utils.Utils$.swallow(Utils.scala:172)
        at kafka.utils.Logging$class.swallowError(Logging.scala:106)
        at kafka.utils.Utils$.swallowError(Utils.scala:45)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:78)
        at kafka.producer.Producer.send(Producer.scala:77)
        at kafka.javaapi.producer.Producer.send(Producer.scala:33)
        at com.test.spark.streaming.JobKafka$3$1.call(JobKafka.java:194)
        at com.test.spark.streaming.JobKafka$3$1.call(JobKafka.java:167)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartitionAsync$1.apply(JavaRDDLike.scala:741)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartitionAsync$1.apply(JavaRDDLike.scala:741)
        at org.apache.spark.SparkContext$$anonfun$34.apply(SparkContext.scala:2021)
        at org.apache.spark.SparkContext$$anonfun$34.apply(SparkContext.scala:2021)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
        at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
        ... 20 more
17/07/12 17:47:22 ERROR DefaultEventHandler: Failed to collate messages by topic, partition due to: fetching topic metadata for topics [Set(topicname)] from broker [ArrayBuffer(id:0,host:xxx.xxx.xxx.xxx,port:9092)] failed
17/07/12 17:47:22 ERROR Utils$: fetching topic metadata for topics [Set(topicname)] from broker [ArrayBuffer(id:0,host:xxx.xxx.xxx.xxx,port:9092)] failed
kafka.common.KafkaException: fetching topic metadata for topics [Set(topicname)] from broker [ArrayBuffer(id:0,host:xxx.xxx.xxx.xxx,port:9092)] failed
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.async.DefaultEventHandler$$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:78)
        at kafka.utils.Utils$.swallow(Utils.scala:172)
        at kafka.utils.Logging$class.swallowError(Logging.scala:106)
        at kafka.utils.Utils$.swallowError(Utils.scala:45)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:78)
        at kafka.producer.Producer.send(Producer.scala:77)
        at kafka.javaapi.producer.Producer.send(Producer.scala:33)
        at com.test.spark.streaming.JobKafka$3$1.call(JobKafka.java:194)
        at com.test.spark.streaming.JobKafka$3$1.call(JobKafka.java:167)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartitionAsync$1.apply(JavaRDDLike.scala:741)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartitionAsync$1.apply(JavaRDDLike.scala:741)
        at org.apache.spark.SparkContext$$anonfun$34.apply(SparkContext.scala:2021)
        at org.apache.spark.SparkContext$$anonfun$34.apply(SparkContext.scala:2021)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
        at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
        ... 20 more

Below is the sample code:

    basicGpsData.foreachRDD(new VoidFunction<JavaRDD<GpsData>>() {
        @Override
        public void call(JavaRDD<GpsData> rdd) throws Exception {
            Properties properties = new Properties();
            properties.put("metadata.broker.list", kafkatotopics);
            properties.put("serializer.class", "kafka.serializer.StringEncoder");

            rdd.foreachPartitionAsync(new VoidFunction<Iterator<GpsData>>() {
                ObjectMapper mapper = new ObjectMapper();

                @Override
                public void call(Iterator<GpsData> partitionRdd) throws Exception {
                    // One MQTT client per partition task
                    MemoryPersistence persistence = new MemoryPersistence();
                    MqttClient client = new MqttClient(mqttbroker, MqttClient.generateClientId(), persistence);

                    MqttConnectOptions options = new MqttConnectOptions();
                    options.setMaxInflight(1000);
                    client.connect(options);

                    // One Kafka producer per partition task
                    ProducerConfig producerConfig = new ProducerConfig(properties);
                    kafka.javaapi.producer.Producer<String, String> producer =
                            new kafka.javaapi.producer.Producer<String, String>(producerConfig);

                    while (partitionRdd.hasNext()) {
                        GpsData gpsData = partitionRdd.next();
                        String json = mapper.writeValueAsString(gpsData);

                        // Send to Kafka (broker B / topic B)
                        System.out.println(" The data is sending to kafka : " + json);
                        KeyedMessage<String, String> kafkaMessage = new KeyedMessage<String, String>(totopics, json);
                        producer.send(kafkaMessage);

                        // Publish the same message to the MQTT broker
                        System.out.println(" The data is sending to MQTT Broker : " + json);
                        MqttTopic msgtopic = client.getTopic(mqtttopic + gpsData.getImei());
                        MqttMessage mqttMessage = new MqttMessage();
                        mqttMessage.setPayload(json.getBytes());
                        msgtopic.publish(mqttMessage);
                    }
                    client.disconnect();
                }
            });
        }
    });

After about an hour of running, I get the error shown above.
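One detail worth noting in the snippet above: the MQTT client is disconnected at the end of each partition, but the Kafka producer is never closed, so each batch may leave broker connections behind. The create-send-close lifecycle can be sketched with a stand-in sender (the `Sender` class below is hypothetical, not the Kafka or MQTT API; only the try-with-resources shape is the point):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class PartitionSendSketch {

    // Hypothetical stand-in for a producer/client; the point is the
    // close-after-use shape, not the real Kafka producer API.
    static class Sender implements AutoCloseable {
        final List<String> sent = new ArrayList<>();
        boolean closed = false;

        void send(String msg) {
            if (closed) throw new IllegalStateException("sender already closed");
            sent.add(msg);
        }

        @Override
        public void close() {
            closed = true;
        }
    }

    // Mirrors the per-partition loop in the question, but guarantees the
    // sender is closed even if serialization or a send throws.
    static int sendPartition(Iterator<String> records) {
        try (Sender producer = new Sender()) {
            while (records.hasNext()) {
                producer.send(records.next());
            }
            return producer.sent.size();
        } // producer.close() runs here on every exit path
    }

    public static void main(String[] args) {
        int n = sendPartition(Arrays.asList("a", "b", "c").iterator());
        System.out.println(n); // prints 3
    }
}
```

Whether the leaked connections are actually what triggers the `ClosedChannelException` after an hour is an assumption; it is one plausible reading of the symptom, not a confirmed diagnosis.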

0 Answers:

No answers yet.