Applied schema on JSON Kafka topic gives all null fields

Time: 2018-11-01 09:42:26

Tags: apache-spark pyspark apache-kafka spark-structured-streaming

I'm using the Hortonworks tool suite and trying to parse JSON data from a Kafka topic into a dataframe. When I query the in-memory table, the dataframe's schema looks correct, but every value is null, and I really can't figure out why.

The JSON data arriving on the Kafka topic looks like this:

{"index":"0","Conrad":"Persevering system-worthy intranet","address":"8905 Robert Prairie\nJoefort, LA 41089","bs":"envisioneer web-enabled mindshare","city":"Davidland","date_time":"1977-06-26 06:12:48","email":"eric56@parker-robinson.com","paragraph":"Kristine Nash","randomdata":"Growth special factor bit only. Thing agent follow moment seat. Nothing agree that up view write include.","state":"1030.0"}

The code in my Zeppelin notebook is as follows:

%dep 
z.load("org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.1")

%pyspark

# Defining my schema
from pyspark.sql.types import StructType, StringType, LongType, IntegerType

schema = (StructType()
    .add("index", IntegerType())
    .add("Conrad", StringType())
    .add("address", StringType())
    .add("bs", StringType())
    .add("city", StringType())
    .add("date_time", LongType())
    .add("email", StringType())
    .add("name", StringType())
    .add("paragraph", StringType())
    .add("randomdata", IntegerType())
    .add("state", StringType()))

# Read data from the Kafka topic and parse the value column as JSON
from pyspark.sql.functions import from_json, col

lines = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "x.x.x.x:2181")
    .option("startingOffsets", "latest")
    .option("subscribe", "testdata")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("parsed_value")))

# Start the stream and query the in-memory table
query = lines.writeStream.format("memory").queryName("t10").start()
raw = spark.sql("select parsed_value.* from t10")
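
To rule out a delivery problem, this is the kind of sanity check I can run against the same topic, reading the raw value as a plain string with no schema applied (the table name t_raw is just a placeholder for this check):

%pyspark

import time

# Sanity check: inspect the raw Kafka value before from_json is applied
raw_lines = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "x.x.x.x:2181")
    .option("startingOffsets", "latest")
    .option("subscribe", "testdata")
    .load()
    .selectExpr("CAST(value AS STRING) AS json_str"))

raw_query = (raw_lines.writeStream
    .format("memory")
    .queryName("t_raw")
    .start())

time.sleep(10)   # give the trigger time to pull a micro-batch before querying
spark.sql("select json_str from t_raw").show(5, truncate=False)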

For now I'm defining the schema explicitly, but my end goal is to pull the Avro schema from the Hortonworks Schema Registry instead. If someone could also show me how to do that, that would be great.
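
In case it helps, this is roughly what I had in mind: fetch the schema text from the registry's REST API and map it onto a Spark StructType by hand. The host, port, endpoint path and the schemaText field below are assumptions on my part (placeholders to check against the registry docs), and the mapping only handles flat records with primitive fields:

import json
import requests

from pyspark.sql.types import (StructType, StructField, StringType,
                               IntegerType, LongType, FloatType,
                               DoubleType, BooleanType)

# Placeholder host/port; the exact REST path should be verified against
# the Schema Registry documentation for the version in use.
REGISTRY_URL = "http://registry-host:7788/api/v1/schemaregistry"
SCHEMA_NAME = "testdata"

resp = requests.get("{}/schemas/{}/versions/latest".format(REGISTRY_URL, SCHEMA_NAME))
resp.raise_for_status()
avro_schema = json.loads(resp.json()["schemaText"])   # assumed response field name

# Minimal Avro-primitive -> Spark mapping (flat records, unions of ["null", T] only)
AVRO_TO_SPARK = {
    "string": StringType(), "int": IntegerType(), "long": LongType(),
    "float": FloatType(), "double": DoubleType(), "boolean": BooleanType(),
}

def avro_record_to_struct(record_schema):
    """Build a Spark StructType from a flat Avro record schema."""
    fields = []
    for f in record_schema["fields"]:
        ftype = f["type"]
        if isinstance(ftype, list):                    # e.g. ["null", "string"]
            ftype = next(t for t in ftype if t != "null")
        fields.append(StructField(f["name"], AVRO_TO_SPARK[ftype], nullable=True))
    return StructType(fields)

schema = avro_record_to_struct(avro_schema)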

Thanks!

0 Answers:

No answers yet.