Why can't sbt find KafkaUtils?

Date: 2019-01-11 19:44:52

Tags: scala apache-spark apache-kafka sbt spark-streaming

This error comes up when compiling the code (a wordCount from Kafka) with SBT:

[error] /home/hduser/sbt_project/project1/src/main/scala/sparkKafka.scala:4:35: object kafka is not a member of package org.apache.spark.streaming
[error] import org.apache.spark.streaming.kafka.KafkaUtils
[error] not found: value KafkaUtils
[error] val lines = KafkaUtils.createStream(ssc, "localhost:2181", "spark-streaming-consumer-group", Map("customer" -> 2))

The build.sbt file contains the following dependencies:

libraryDependencies += "org.apache.spark" % "spark-core_2.12" % "2.4.0"
libraryDependencies += "org.apache.spark" % "spark-streaming_2.12" % "2.4.0"
libraryDependencies += "org.apache.spark" % "spark-streaming-kafka-0-10_2.12" % "2.4.0"

How do I import KafkaUtils correctly?

1 answer:

Answer 0 (score: 2):

KafkaUtils is in the org.apache.spark.streaming.kafka010 package (note that the imported namespace includes the version: kafka010). The old org.apache.spark.streaming.kafka namespace in the failing import belongs to the separate kafka-0-8 connector, which is not among the declared dependencies, hence the "object kafka is not a member" error.

From the Spark Streaming + Kafka Integration Guide:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

// val streamingContext = ...

// Consumer configuration, passed straight through to the underlying Kafka consumer.
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "use_a_separate_group_id_for_each_stream",
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)

val topics = Array("topicA", "topicB")
// Create a direct stream of ConsumerRecord[String, String] from the given topics.
val stream = KafkaUtils.createDirectStream[String, String](
  streamingContext,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams)
)
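
Since the question began as a wordCount, here is a hedged sketch of how the stream might be consumed from there. The DStream operations are standard Spark Streaming API; the splitting-on-spaces logic and the reuse of the streamingContext placeholder above are illustrative assumptions, not part of the original answer:

// Illustrative continuation: pull out the message values, split them into
// words, and count occurrences within each batch.
val words = stream.map(record => record.value).flatMap(_.split(" "))
val wordCounts = words.map(word => (word, 1)).reduceByKey(_ + _)
wordCounts.print()

streamingContext.start()
streamingContext.awaitTermination()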

Note: it is generally recommended to use Spark Structured Streaming with Kafka instead.
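
For completeness, a minimal sketch of the Structured Streaming equivalent, assuming a SparkSession named spark and the "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.4.0" dependency in build.sbt (both are assumptions, not part of the original answer):

// Read Kafka messages as a streaming DataFrame; key and value arrive as binary.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "topicA,topicB")
  .load()

// Cast the binary value column to a string before further processing.
val messages = df.selectExpr("CAST(value AS STRING)")

// Write each micro-batch to the console and block until the query stops.
val query = messages.writeStream
  .format("console")
  .start()

query.awaitTermination()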