Spark-shell error: object map is not a member of package org.apache.spark.streaming.rdd

Asked: 2018-06-09 14:25:40

Tags: scala apache-spark apache-spark-sql spark-streaming

I am trying to use Spark Streaming to read JSON from the Kafka topic KafkaStreamTestTopic1, parse out the two values valueStr1 and valueStr2, and convert them into a DataFrame for further processing.

I am running the code in spark-shell, so the Spark context sc is already available.

But when I run this script, it gives me the following error:

  

error: object map is not a member of package org.apache.spark.streaming.rdd
       val dfa = rdd.map(record => {

Here is the script being used:

import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.spark.{SparkConf, TaskContext}
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.kafka010._
import org.apache.kafka.common.serialization.StringDeserializer
import play.api.libs.json._
import org.apache.spark.sql._

val ssc = new StreamingContext(sc, Seconds(5))

val sparkSession = SparkSession.builder().appName("myApp").getOrCreate()
val sqlContext = new SQLContext(sc)

// Create direct kafka stream with brokers and topics
val topicsSet = Array("KafkaStreamTestTopic1").toSet

// Set kafka Parameters
val kafkaParams = Map[String, String](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
  "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
  "group.id" -> "my_group",
  "auto.offset.reset" -> "earliest",
  "enable.auto.commit" -> "false"
)

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](topicsSet, kafkaParams)
)

val lines = stream.map(_.value)

lines.print()

case class MyObj(val one: JsValue)

lines.foreachRDD(rdd => {
  println("Debug Entered")

  import sparkSession.implicits._
  import sqlContext.implicits._


  val dfa = rdd.map(record => {

    implicit val myObjEncoder = org.apache.spark.sql.Encoders.kryo[MyObj]

    val json: JsValue = Json.parse(record)
    val value1 = (json \ "root" \ "child1" \ "child2" \ "valueStr1").getOrElse(null)
    val value2 = (json \ "root" \ "child1" \ "child2" \ "valueStr2").getOrElse(null)

    (new MyObj(value1), new MyObj(value2))

  }).toDF()

  dfa.show()
  println("Dfa Size is: " + dfa.count())


})

ssc.start()

2 Answers:

Answer 0 (score: 1):

I think the problem is that rdd is also the name of a package (org.apache.spark.streaming.rdd) that gets pulled in by the wildcard import:

import org.apache.spark.streaming._

To avoid this conflict, rename your variable to something else, for example myRdd:

lines.foreachRDD(myRdd => { /* ... */ })
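Applied to the loop from the question, only the lambda parameter changes. Here is a minimal sketch of the rewritten block; to keep it self-contained it extracts the two fields as plain strings with Play JSON's asOpt, instead of the Kryo-encoded MyObj wrapper, so the default tuple encoder from sparkSession.implicits._ applies:

lines.foreachRDD(myRdd => {
  import sparkSession.implicits._

  // myRdd no longer collides with the auto-imported org.apache.spark.streaming.rdd package,
  // so .map now resolves to RDD.map as intended
  val dfa = myRdd.map { record =>
    val json: JsValue = Json.parse(record)
    val value1 = (json \ "root" \ "child1" \ "child2" \ "valueStr1").asOpt[String].orNull
    val value2 = (json \ "root" \ "child1" \ "child2" \ "valueStr2").asOpt[String].orNull
    (value1, value2)
  }.toDF("valueStr1", "valueStr2")

  dfa.show()
  println("Dfa Size is: " + dfa.count())
})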

Answer 1 (score: 0):

Add the spark-streaming dependencies to your build manager:

     "org.apache.spark" %% "spark-mllib" % SparkVersion,
    "org.apache.spark" %% "spark-streaming-kafka-0-10" % 
     "2.0.1"

You can add these with Maven or SBT as part of your build.
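For example, with SBT the relevant libraryDependencies entries might look roughly like the sketch below; the SparkVersion value and the play-json version are assumptions and must match the Spark and Scala versions actually installed:

// build.sbt sketch; versions are placeholders, not a verified configuration
val SparkVersion = "2.2.0"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"                 % SparkVersion % "provided",
  "org.apache.spark" %% "spark-sql"                  % SparkVersion % "provided",
  "org.apache.spark" %% "spark-streaming"            % SparkVersion % "provided",
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % SparkVersion,
  "com.typesafe.play" %% "play-json"                 % "2.6.9"
)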