I am trying to capture Kafka events (which I receive in serialized form) with Spark Streaming in Scala.
Here is my code snippet:
val spark = SparkSession.builder().master("local[*]").appName("Spark-Kafka-Integration").getOrCreate()
spark.conf.set("spark.driver.allowMultipleContexts", "true")
val sc = spark.sparkContext
val ssc = new StreamingContext(sc, Seconds(5))
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
val topics=Set("<topic-name>")
val brokers="<some-list>"
val groupId="spark-streaming-test"
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> brokers,
  "auto.offset.reset" -> "earliest",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
  "group.id" -> groupId,
  "enable.auto.commit" -> (false: java.lang.Boolean)
)
val messages: InputDStream[ConsumerRecord[String, String]] =
  KafkaUtils.createDirectStream[String, String](
    ssc,
    LocationStrategies.PreferConsistent,
    ConsumerStrategies.Subscribe[String, String](topics, kafkaParams)
  )
messages.foreachRDD { rdd =>
  println(rdd.toDF())
}
ssc.start()
ssc.awaitTermination()
I get the following error message:
Error:(59, 19) value toDF is not a member of org.apache.spark.rdd.RDD[org.apache.kafka.clients.consumer.ConsumerRecord[String,String]]
    println(rdd.toDF())
Answer 0 (Score: 1)
toDF comes in via DatasetHolder:
https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.SQLImplicits
I haven't reproduced it, but my guess is that there is no Encoder for ConsumerRecord[String, String], so you can either provide one yourself, or first map each record to something for which an Encoder can be derived (a case class or a primitive).
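A minimal sketch of that mapping approach, assuming the String key/value pair is all you need (it reuses the sqlContext.implicits._ import already in your snippet; I haven't run it against your exact setup):

messages.foreachRDD { rdd =>
  // (String, String) tuples have a built-in Encoder, so toDF compiles here
  val df = rdd.map(record => (record.key, record.value)).toDF("key", "value")
  df.show(false)  // renders a few rows on the driver instead of println-ing the DataFrame object
}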
The println in foreachRDD may also not behave the way you want, due to Spark's distributed nature.
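As a side note (my own suggestion, not part of the original answer): if the goal is only to eyeball a few records, you can pull a small sample back to the driver explicitly, so the output is guaranteed to appear in the driver log:

rdd.take(10).foreach(record => println(s"${record.key} -> ${record.value}"))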