How to extract values from a Kafka row with Spark under Structured Streaming?

Date: 2019-04-30 19:05:26

Tags: scala apache-spark apache-kafka spark-streaming

Given a DataFrame that I read from Kafka, how can I extract values from it via pattern matching?

The DataFrame:

val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1")
  .option("startingOffsets", "earliest")
  .load()

My problem is that the schema looks like this:

df.printSchema()

root
 |-- key: binary (nullable = true)
 |-- value: binary (nullable = true)
 |-- topic: string (nullable = true)
 |-- partition: integer (nullable = true)
 |-- offset: long (nullable = true)
 |-- timestamp: timestamp (nullable = true)
 |-- timestampType: integer (nullable = true)

That binary type is what I cannot pattern match on. How would I extract the value and then parse it?
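For context: the binary `value` column is just the raw bytes of each Kafka record, and when the payload is plain text the usual extraction is `df.selectExpr("CAST(value AS STRING)")`. A minimal pure-Scala sketch of what that cast does for a single row's bytes (the sample payload here is made up):

```scala
import java.nio.charset.StandardCharsets

// Kafka hands Spark each record's key and value as raw bytes,
// which is why the schema shows them as binary.
val rawValue: Array[Byte] = "hello from topic1".getBytes(StandardCharsets.UTF_8)

// CAST(value AS STRING) is essentially a UTF-8 decode of those bytes:
val decoded: String = new String(rawValue, StandardCharsets.UTF_8)

println(decoded)
```

For Avro payloads (as in the answer below the bytes must instead be decoded against a schema) a plain string cast would produce garbage.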

1 Answer:

Answer 0 (score: 2)

Question: How would I extract the value and then parse it?

I assume you are working with Avro messages, so you can try the snippet below (I don't know exactly what you want to pattern match on here). The `decodeAndParseObject` function uses Twitter's Bijection API, which requires the following dependency:

<!-- https://mvnrepository.com/artifact/com.twitter/bijection-avro -->
<dependency>
    <groupId>com.twitter</groupId>
    <artifactId>bijection-avro_2.10</artifactId>
    <version>0.7.0</version>
</dependency>

val ds = df.select("value").as[Array[Byte]].map(x => decodeAndParseObject(x))

where

import org.apache.avro.Schema
import org.apache.avro.generic.GenericRecord
import com.twitter.bijection.Injection
import com.twitter.bijection.avro.GenericAvroCodecs

/**
 * Decode and parse the binary payload based on your schema... your logic goes here.
 */
def decodeAndParseObject(message: Array[Byte]): GenericRecord = {
  val schema = new Schema.Parser().parse("yourschemahere")

  val recordInjection: Injection[GenericRecord, Array[Byte]] =
    GenericAvroCodecs.toBinary(schema)

  val record: GenericRecord = recordInjection.invert(message).get
  println(record.getSchema)
  record.getSchema.getFields.toArray().foreach(println)
  println("\n\n\n\n\n\n Record " + record.toString.replaceAll(",", "\n"))
  // get the column and do pattern matching....
  // prepare another generic record .... I'm leaving it blank here...

  record
}

Update: you can take the generic record above, fetch the column you need with `record.get("yourcolumn")`, and do Scala pattern matching on it.
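A sketch of what that pattern match might look like: `GenericRecord.get` is typed as `Object` (`Any` in Scala), so you match on the runtime type of the field. The field values below are placeholders, and in a real Avro record string fields arrive as `org.apache.avro.util.Utf8`, which you would add as an extra case:

```scala
// record.get("yourcolumn") returns Any, so a type-based
// pattern match recovers the concrete value of the field.
def describe(field: Any): String = field match {
  case s: String => s"string: $s"
  case i: Int    => s"int: $i"
  case l: Long   => s"long: $l"
  case null      => "null field"
  case other     => s"unhandled type: ${other.getClass.getName}"
}

println(describe("abc"))
println(describe(42))
```

The catch-all `other` case is important: Avro unions and logical types can surface runtime classes you did not anticipate, and an unhandled case would otherwise throw a `MatchError`.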