Using a schema to convert ConsumerRecord values to a DataFrame in spark-kafka

Asked: 2017-09-13 14:45:12

Tags: scala apache-spark apache-kafka

I am using Spark 2.0.2 with Kafka 0.11.0, and I am trying to consume messages from Kafka in Spark Streaming. Here is the code:

import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val topics = "notes"
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:7092",
  "schema.registry.url" -> "http://localhost:7070",
  "group.id" -> "connect-cluster1",
  "value.deserializer" -> "io.confluent.kafka.serializers.KafkaAvroDeserializer",
  "key.deserializer" -> "io.confluent.kafka.serializers.KafkaAvroDeserializer"
)
val topicSet: Set[String] = Set(topics)
val stream = KafkaUtils.createDirectStream[String, String](
  SparkStream.ssc,
  PreferConsistent,
  Subscribe[String, String](topicSet, kafkaParams)
)
stream.foreachRDD ( rdd => {
  rdd.foreachPartition(iterator => {
    while (iterator.hasNext) {
      val next = iterator.next()
      println(next.value())
    }
  })
})

If the Kafka topic contains records, the output looks like this:

{"id": "4164a489-a0bb-4ea1-a259-b4e2a4519eee", "createdat": 1505312886984, "createdby": "karthik", "notes": "testing20"}
{"id": "4164a489-a0bb-4ea1-a259-b4e2a4519eee", "createdat": 1505312890472, "createdby": "karthik", "notes": "testing21"}

So, as can be seen from the consumerRecord values, the messages received are Avro-decoded. Now I need those records in DataFrame format, but I don't know how to proceed from here, even though I have the schema at hand:

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient
import io.confluent.kafka.serializers.KafkaAvroDecoder
import org.apache.avro.Schema

val sr : CachedSchemaRegistryClient = new CachedSchemaRegistryClient("http://localhost:7070", 1000)
val m = sr.getLatestSchemaMetadata(topics + "-value")
val schemaId = m.getId
val schemaString = m.getSchema

val schemaRegistry : CachedSchemaRegistryClient = new CachedSchemaRegistryClient("http://localhost:7070", 1000)
val decoder: KafkaAvroDecoder = new KafkaAvroDecoder(schemaRegistry)
val parser = new Schema.Parser()
val avroSchema = parser.parse(schemaString)
println(avroSchema)

The printed schema is as follows:

{"type":"record","name":"notes","namespace":"db","fields":[{"name":"id","type":["null","string"],"default":null},{"name":"createdat","type":["null",{"type":"long","connect.version":1,"connect.name":"org.apache.kafka.connect.data.Timestamp","logicalType":"timestamp-millis"}],"default":null},{"name":"createdby","type":["null","string"],"default":null},{"name":"notes","type":["null","string"],"default":null}],"connect.name":"db.notes"}

Can anyone help me understand how to get a DataFrame from the consumer record's value? I have looked at other questions, such as "Use schema to convert AVRO messages with Spark to DataFrame" and "Handling schema changes in running Spark Streaming application", but they do not deal with the consumerRecord in the first place.

2 Answers:

Answer 0 (score: 3)

You can use the following snippet. stream is the DStream of consumer records returned by the kafka010 KafkaUtils API:

import org.apache.spark.sql.SQLContext

stream.foreachRDD { rdd =>
  if (!rdd.isEmpty()) {
    val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
    import sqlContext.implicits._
    // render each record value as a JSON string and let Spark infer the schema
    val topicValueStrings = rdd.map(record => record.value().toString)
    val df = sqlContext.read.json(topicValueStrings)
    df.show()
  }
}
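
If you would rather apply the registered schema explicitly instead of letting read.json infer one, a minimal sketch in the same spirit could look like this (it assumes the databricks spark-avro library is on the classpath for its SchemaConverters helper, and that schemaString is the schema text fetched from the registry as shown in the question):

import com.databricks.spark.avro.SchemaConverters
import org.apache.avro.Schema
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types.StructType

// Parse the Avro schema string fetched from the schema registry
val avroSchema = new Schema.Parser().parse(schemaString)

// Convert the Avro schema into a Spark SQL StructType
val sqlSchema = SchemaConverters.toSqlType(avroSchema).dataType.asInstanceOf[StructType]

stream.foreachRDD { rdd =>
  if (!rdd.isEmpty()) {
    val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
    // parse the JSON-rendered record values against the explicit schema
    val jsonStrings = rdd.map(record => record.value().toString)
    val df = sqlContext.read.schema(sqlSchema).json(jsonStrings)
    df.show()
  }
}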

Answer 1 (score: 0)

I'm new to scala/kafka/spark, so I'm not sure whether this fully answers the question, but it helped me. I'm sure there is a better way than this, so hopefully someone with more experience can come along and provide a better answer.

import org.apache.spark.sql.SaveMode

// KafkaRDD
stream.foreachRDD { rdd =>

  // pull the values I'm looking for into a string array
  // (note: collect() brings the whole batch to the driver)
  val x = rdd.map(row => row.value()).collect()

  // convert to a single-column dataframe
  import spark.implicits._
  val df = x.toSeq.toDF("record")

  // write data frame to datastore (MySQL in my case)
  df.write
    .mode(SaveMode.Append)
    .jdbc(url, table, props)
}
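
For completeness, url, table and props are not defined in the snippet above; a minimal, hypothetical setup for a MySQL target (adjust the connection details to your environment) might be:

import java.util.Properties

// Hypothetical MySQL connection details (replace with your own)
val url = "jdbc:mysql://localhost:3306/notesdb"
val table = "notes"
val props = new Properties()
props.setProperty("user", "root")
props.setProperty("password", "secret")
props.setProperty("driver", "com.mysql.jdbc.Driver")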