Couldn't find leaders for Set([TOPICNNAME,0]) when using Apache Spark

Time: 2015-11-20 05:42:24

Tags: apache-spark apache-kafka spark-streaming

We are using Apache Spark 1.5.1 and kafka_2.10-0.8.2.1 with the Kafka DirectStream API to fetch data from Kafka with Spark.
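For context, here is a minimal sketch of how a direct stream is typically created against Kafka 0.8 from Spark Streaming 1.5.x; the broker addresses, application name and batch interval below are placeholders, not values from our setup:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object DirectStreamSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-direct-sketch")
    val ssc = new StreamingContext(conf, Seconds(10)) // batch interval is an assumption

    // "metadata.broker.list" and the topic name are placeholders for your own brokers/topic.
    val kafkaParams = Map[String, String]("metadata.broker.list" -> "broker1:9092,broker2:9092")
    val topics = Set("normalized-tenant4")

    // Direct (receiver-less) stream: the driver looks up partition leaders and offsets when
    // generating each batch, which is where the "Couldn't find leaders" error below is raised.
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    stream.map(_._2).print()

    ssc.start()
    ssc.awaitTermination()
  }
}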

We created the topics in Kafka with the following settings:

ReplicationFactor: 1 and Replica: 1

Spark works fine when all the Kafka instances are running. However, when one of the Kafka instances in the cluster goes down, we get the exception shown below. After some time we restarted the downed Kafka instance and tried to finish the Spark job, but Spark had already terminated because of the exception. As a result, we could not read the remaining messages from the Kafka topics.

ERROR DirectKafkaInputDStream:125 - ArrayBuffer(org.apache.spark.SparkException: Couldn't find leaders for Set([normalized-tenant4,0]))
ERROR JobScheduler:96 - Error generating jobs for time 1447929990000 ms
org.apache.spark.SparkException: ArrayBuffer(org.apache.spark.SparkException: Couldn't find leaders for Set([normalized-tenant4,0]))
        at org.apache.spark.streaming.kafka.DirectKafkaInputDStream.latestLeaderOffsets(DirectKafkaInputDStream.scala:123)
        at org.apache.spark.streaming.kafka.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:145)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
        at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:399)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:344)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:342)
        at scala.Option.orElse(Option.scala:257)
        at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:339)
        at org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:38)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:120)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:120)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
        at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
        at org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:120)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:247)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:245)
        at scala.util.Try$.apply(Try.scala:161)
        at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:245)
        at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:181)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:87)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:86)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

Thanks in advance. Please help to resolve this issue.

2 answers:

Answer 0 (score: 4)

This is the expected behaviour. By setting ReplicationFactor to one, you have asked for each topic to be stored on exactly one machine. When the one machine that happens to store the topic normalized-tenant4 is taken down, the consumer cannot find a leader for that topic.

See http://kafka.apache.org/documentation.html#intro_guarantees
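If the stream has to survive the loss of a single broker, the topic needs a replication factor greater than one, so that another broker can take over leadership of the partition. As a sketch (the ZooKeeper address and partition count are assumptions), the topic could be created with two replicas:

kafka-topics.sh --create --zookeeper zk-host:2181 --topic normalized-tenant4 --partitions 1 --replication-factor 2

Note that with Kafka 0.8.2 the replication factor of an existing topic cannot simply be altered; it has to be set at creation time or changed afterwards through a partition reassignment.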

Answer 1 (score: 1)

One of the reasons for this type of error, where a leader cannot be found for the specified topic, is a problem with your Kafka server configuration.

Open your Kafka server configuration:

vim ./kafka/kafka-<your-version>/config/server.properties

In the "Socket Server Settings" section, provide the IP for your host if it is missing:

listeners=PLAINTEXT://{host-ip}:{host-port}

I was using the Kafka setup that ships with the MapR sandbox and was trying to access Kafka from Spark code. Because my configuration was missing the IP, I was getting the same error when accessing my Kafka.
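After the fix (or once the downed broker is back up), one way to confirm that a leader is assigned again is to describe the topic; the ZooKeeper address here is a placeholder:

kafka-topics.sh --describe --zookeeper zk-host:2181 --topic normalized-tenant4

The output lists the Leader, Replicas and Isr for each partition; a Leader of -1 means that the partition currently has no leader.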