Spark Streaming Kafka Stream

Time: 2015-12-07 23:43:10

Tags: apache-spark apache-kafka spark-streaming spark-streaming-kafka

I'm running into a problem trying to read from Kafka with Spark Streaming.

My code is:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val sparkConf = new SparkConf().setMaster("local[2]").setAppName("KafkaIngestor")
val ssc = new StreamingContext(sparkConf, Seconds(2))

val kafkaParams = Map[String, String](
  "zookeeper.connect" -> "localhost:2181",
  "group.id" -> "consumergroup",
  "metadata.broker.list" -> "localhost:9092",
  "zookeeper.connection.timeout.ms" -> "10000"
  //"kafka.auto.offset.reset" -> "smallest"
)

val topics = Set("test")
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)

I previously started ZooKeeper on port 2181 and a Kafka 0.9.0.0 server on port 9092, but I get the following error in the Spark driver:

Exception in thread "main" java.lang.ClassCastException: kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6$$anonfun$apply$7.apply(KafkaCluster.scala:90)
at scala.Option.map(Option.scala:145)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:90)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$2$$anonfun$3$$anonfun$apply$6.apply(KafkaCluster.scala:87)

ZooKeeper log:

[2015-12-08 00:32:08,226] INFO Got user-level KeeperException when processing sessionid:0x1517ec89dfd0000 type:create cxid:0x34 zxid:0x1d3 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)

Any hints?

Many thanks

2 answers:

Answer 0 (score: 16)

The problem was related to a wrong spark-streaming-kafka version.

As stated in the documentation:

Kafka: Spark Streaming 1.5.2 is compatible with Kafka 0.8.2.1

So, including

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.2.2</version>
</dependency>

in my pom.xml (instead of version 0.9.0.0) solved the problem.
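
For sbt users, a rough equivalent of the same fix, as a sketch (assuming Spark 1.5.2 built for Scala 2.10; these coordinates are not quoted from the answer itself), would be to pin both the Spark Kafka integration and the Kafka client to matching versions:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % "1.5.2",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.5.2",
  "org.apache.kafka" %% "kafka" % "0.8.2.2"
)

The key point is the same with either build tool: keep the Kafka client artifact on the 0.8.2.x line that Spark Streaming 1.5.2's Kafka integration expects, rather than pulling in 0.9.0.0.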

Hope this helps

Answer 1 (score: 0)

Kafka 10 streaming / Spark 2.1.0 / DCOS / Mesosphere

Ugh, I spent an entire day on this and must have read this post a dozen times. I tried Spark 2.0.0, 2.0.1, Kafka 8, and Kafka 10. Stay away from Kafka 8 and Spark 2.0.x; the dependencies are everything. Start from the list below. It works.

SBT:

"org.apache.hadoop" % "hadoop-aws" % "2.7.3" excludeAll ExclusionRule(organization = "org.apache.hadoop", name = "hadoop-common"),
"org.apache.spark" %% "spark-core" % "2.1.0",
"org.apache.spark" %% "spark-sql" % "2.1.0" ,
"org.apache.spark" % "spark-streaming-kafka-0-10_2.11" % "2.1.0",
"org.apache.spark" % "spark-streaming_2.11" % "2.1.0"

Kafka / Spark Streaming code:

import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

val spark = SparkSession
  .builder()
  .appName("ingest")
  .master("local[4]")
  .getOrCreate()

import spark.implicits._
val ssc = new StreamingContext(spark.sparkContext, Seconds(2))

val topics = Set("water2").toSet

val kafkaParams = Map[String, String](
  "metadata.broker.list"        -> "broker:port,broker:port",
  "bootstrap.servers"           -> "broker:port,broker:port",
  "group.id"                    -> "somegroup",
  "auto.commit.interval.ms"     -> "1000",
  "key.deserializer"            -> "org.apache.kafka.common.serialization.StringDeserializer",
  "value.deserializer"          -> "org.apache.kafka.common.serialization.StringDeserializer",
  "auto.offset.reset"           -> "earliest",
  "enable.auto.commit"          -> "true"
)

val messages = KafkaUtils.createDirectStream[String, String](ssc, PreferConsistent, Subscribe[String, String](topics, kafkaParams))

// For each non-empty micro-batch, convert the records to a Dataset and print a sample
messages.foreachRDD(rdd => {
  if (rdd.count() >= 1) {
    rdd.map(record => (record.key, record.value))
      .toDS()
      .withColumnRenamed("_2", "value")
      .drop("_1")
      .show(5, false)
    println(rdd.getClass)
  }
})
ssc.start()
ssc.awaitTermination()
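
As a side note, the 0-10 direct stream delivers Kafka ConsumerRecord objects, so the topic, partition, and offset of each message are available as well. A minimal sketch (not part of the original answer) that just logs that metadata instead of building a Dataset:

// Hypothetical variant of the foreachRDD block above: print Kafka metadata per record
messages.foreachRDD { rdd =>
  rdd.foreach { record =>
    println(s"${record.topic()} [partition ${record.partition()}, offset ${record.offset()}]: ${record.value()}")
  }
}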

If you got this far, then maybe I can get some reputation points. :)