Kafka 0.9.0 and Spark Streaming 2.1.0: kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker

Asked: 2017-03-20 16:08:10

Tags: scala apache-kafka spark-streaming

I'm running into a problem with Spark Streaming 2.1.0, Scala 2.11.8 and Kafka 0.9.0. These are my dependencies:

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "2.1.0",
      "org.apache.spark" %% "spark-streaming" % "2.1.0",
      "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.1.0",
      "org.apache.kafka" % "kafka-clients" % "0.9.0.0",
      "org.apache.kafka" %% "kafka" % "0.9.0.0"
    )

And this is the Spark Streaming code that connects to the Kafka server:

    import kafka.serializer.StringDecoder
    import org.apache.spark.streaming.StreamingContext
    import org.apache.spark.streaming.dstream.InputDStream
    import org.apache.spark.streaming.kafka.KafkaUtils

    def initializeKafka(ssc: StreamingContext, topic: String): InputDStream[(String, String)] = {
      // Consumer settings for the direct (receiver-less) Kafka stream
      val kafkaParams = Map[String, String](
        "metadata.broker.list" -> "debian:9092",
        "group.id" -> "222",
        "auto.offset.reset" -> "smallest",
        "enable.auto.commit" -> "false")

      KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
        ssc, kafkaParams, Set(topic))
    }
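For completeness, this is roughly how the method is called from the driver (a simplified sketch; the master, batch interval, and topic name here are just example values, not my exact setup):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object LioncubStream {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("lioncub-stream").setMaster("local[2]")
        val ssc = new StreamingContext(conf, Seconds(5)) // 5s batches, example value

        // "test" stands in for the real topic created via the quickstart
        val stream = KafkaInitializer.initializeKafka(ssc, "test")
        stream.map(_._2).print() // just print the message values of each batch

        ssc.start()
        ssc.awaitTermination()
      }
    }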

I'm currently running kafka_2.11-0.9.0.0 on a Debian VM whose hostname is mapped to "debian" in /etc/hosts. It is a single broker, and the topic was created following the Kafka quickstart (https://kafka.apache.org/090/documentation/#quickstart). I have to use this version of Kafka because it is the one running on the cluster where I have to deploy my software.

I get the following error:

Exception in thread "main" java.lang.ClassCastException: kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6$$anonfun$apply$7.apply(KafkaCluster.scala:97)
at scala.Option.map(Option.scala:146)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:97)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:94)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3.apply(KafkaCluster.scala:94)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3.apply(KafkaCluster.scala:93)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2.apply(KafkaCluster.scala:93)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2.apply(KafkaCluster.scala:92)
at scala.util.Either$RightProjection.flatMap(Either.scala:522)
at org.apache.spark.streaming.kafka.KafkaCluster.findLeaders(KafkaCluster.scala:92)
at org.apache.spark.streaming.kafka.KafkaCluster.getLeaderOffsets(KafkaCluster.scala:186)
at org.apache.spark.streaming.kafka.KafkaCluster.getLeaderOffsets(KafkaCluster.scala:168)
at org.apache.spark.streaming.kafka.KafkaCluster.getEarliestLeaderOffsets(KafkaCluster.scala:162)
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$5.apply(KafkaUtils.scala:213)
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$5.apply(KafkaUtils.scala:211)
at scala.util.Either$RightProjection.flatMap(Either.scala:522)
at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:211)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
at hfu.lioncub.KafkaInitializer$.initializeKafka(KafkaInitializer.scala:22)
at hfu.lioncub.LioncubStream$.main(LioncubStream.scala:44)
at hfu.lioncub.LioncubStream.main(LioncubStream.scala)

This looks like a version conflict, but I can't figure it out. Any ideas?

1 Answer:

Answer 0 (score: 0)

I fixed this by changing the Kafka dependency to "org.apache.kafka" % "kafka-clients" % "0.8.2.1" and removing "org.apache.kafka" %% "kafka" % "0.9.0.0". spark-streaming-kafka-0-8 is compiled against the Kafka 0.8 client classes, so pulling the 0.9 jar onto the classpath makes the cast from kafka.cluster.BrokerEndPoint to kafka.cluster.Broker fail; a 0.8.2.1 client can still talk to a 0.9 broker. After that I ran into a problem where the HDFS FileSystem configuration could not be resolved, which I also fixed by adding a merge strategy: case PathList("META-INF", "services", "org.apache.hadoop.fs.FileSystem") => MergeStrategy.filterDistinctLines
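For reference, a sketch of the relevant build.sbt pieces after both fixes (this assumes sbt-assembly; the exact merge-strategy syntax depends on your plugin version, so treat it as an illustration rather than my verbatim build file):

    // build.sbt (sketch)
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "2.1.0",
      "org.apache.spark" %% "spark-streaming" % "2.1.0",
      "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.1.0",
      // 0.8.2.1 matches what spark-streaming-kafka-0-8 was compiled against;
      // the explicit "kafka" 0.9.0.0 dependency is gone
      "org.apache.kafka" % "kafka-clients" % "0.8.2.1"
    )

    // sbt-assembly: concatenate the distinct lines of the Hadoop FileSystem
    // service files instead of keeping only one of them, so every FileSystem
    // implementation stays registered in the fat jar
    assemblyMergeStrategy in assembly := {
      case PathList("META-INF", "services", "org.apache.hadoop.fs.FileSystem") =>
        MergeStrategy.filterDistinctLines
      case x =>
        val oldStrategy = (assemblyMergeStrategy in assembly).value
        oldStrategy(x)
    }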