Keep the raw JSON as a column in a Spark DataFrame on read/load?

Asked: 2018-05-07 15:28:03

Tags: json apache-spark apache-spark-sql

When reading data into a Spark DataFrame, I've been looking for a way to add the raw (JSON) data as a column. I have one approach that does this via a join, but I'm hoping for a way to do it in a single operation using Spark 2.2.x+.

Example data:

{"team":"Golden Knights","colors":"gold,red,black","origin":"Las Vegas"}
{"team":"Sharks","origin": "San Jose", "eliminated":"true"}
{"team":"Wild","colors":"red,green,gold","origin":"Minnesota"}

When executing:

val logs = sc.textFile("/Users/vgk/data/tiny.json") // example data file
spark.read.json(logs).show

Predictably, we get:

+--------------+----------+--------------------+--------------+
|        colors|eliminated|              origin|          team|
+--------------+----------+--------------------+--------------+
|gold,red,black|      null|           Las Vegas|Golden Knights|
|          null|      true|            San Jose|        Sharks|
|red,green,gold|      null|           Minnesota|          Wild|
|red,white,blue|     false|District of Columbia|      Capitals|
+--------------+----------+--------------------+--------------+

What I want on the initial load is the above, but with the raw JSON data as an additional column. For example (truncated raw values):

+--------------+----------+--------------------+--------------+--------------------+
|        colors|eliminated|              origin|          team|               value|
+--------------+----------+--------------------+--------------+--------------------+
|red,white,blue|     false|District of Columbia|      Capitals|{"colors":"red,wh...|
|gold,red,black|      null|           Las Vegas|Golden Knights|{"colors":"gold,r...|
|          null|      true|            San Jose|        Sharks|{"eliminated":"tr...|
|red,green,gold|      null|           Minnesota|          Wild|{"colors":"red,gr...|
+--------------+----------+--------------------+--------------+--------------------+

A non-ideal solution involves a join:

import org.apache.spark.sql.functions.monotonically_increasing_id

val logs = sc.textFile("/Users/vgk/data/tiny.json")
val df = spark.read.json(logs).withColumn("uniqueID", monotonically_increasing_id())
val rawdf = df.toJSON.withColumn("uniqueID", monotonically_increasing_id())
df.join(rawdf, "uniqueID")

This produces the same dataframe as above, but with an extra uniqueID column. Also, the json is rendered from the DF, and is not necessarily the "raw" data. In practice they are identical, but for my use case the actual raw data is preferable.
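
If the extra uniqueID column is the only objection, it can be dropped after the join; a minimal sketch (the JSON is still re-rendered from the DataFrame rather than preserved raw):

val joined = df.join(rawdf, "uniqueID").drop("uniqueID")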

Does anyone know of a solution that captures the raw JSON data as an additional column at load time?

3 Answers:

Answer 0 (score: 2)

If you have the schema of the data you're receiving, you can use from_json together with the schema to get all the fields while keeping the raw field:

import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import spark.implicits._

val logs = spark.sparkContext.textFile(path) // example data file

val schema = StructType(
  StructField("team", StringType, true) ::
  StructField("colors", StringType, true) ::
  StructField("eliminated", StringType, true) ::
  StructField("origin", StringType, true) :: Nil
)

logs.toDF("values")
    .withColumn("json", from_json($"values", schema))
    .select("values", "json.*")
    .show(false)

Output:

+------------------------------------------------------------------------+--------------+--------------+----------+---------+
|values                                                                  |team          |colors        |eliminated|origin   |
+------------------------------------------------------------------------+--------------+--------------+----------+---------+
|{"team":"Golden Knights","colors":"gold,red,black","origin":"Las Vegas"}|Golden Knights|gold,red,black|null      |Las Vegas|
|{"team":"Sharks","origin": "San Jose", "eliminated":"true"}             |Sharks        |null          |true      |San Jose |
|{"team":"Wild","colors":"red,green,gold","origin":"Minnesota"}          |Wild          |red,green,gold|null      |Minnesota|
+------------------------------------------------------------------------+--------------+--------------+----------+---------+
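
If you'd rather not write the schema by hand, it can be inferred from the data itself. A sketch, assuming Spark 2.2+ (where spark.read.json accepts a Dataset[String]); note the inference pass reads the data an extra time:

import org.apache.spark.sql.functions.from_json
import spark.implicits._

val raw = spark.read.textFile(path)       // Dataset[String] with a single "value" column
val schema = spark.read.json(raw).schema  // infer the schema from the data itself
raw.withColumn("json", from_json($"value", schema))
   .select("value", "json.*")
   .show(false)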

Hope this helps!

Answer 1 (score: 1)

Read each line with an rdd mapper, manipulate the string to add the raw line into the json string, and then parse that rdd with the dataframe json reader:

RAW_JSON_FIELD_NAME = 'raw_json'  # assumed name for the added raw field

def addRawToJson(line):
    line = line.strip()
    # Escape backslashes and quotes so the raw line survives as a JSON string value.
    rawJson = line.replace('\\', '\\\\').replace('"', '\\"')
    # Drop the closing '}' and append the raw line as an extra field.
    linePlusRaw = f'{line[0:len(line)-1]}, "{RAW_JSON_FIELD_NAME}":"{rawJson}"' + '}'
    return linePlusRaw

rawAugmentedJsonRdd = sc.textFile('add file path here').map(addRawToJson)
df = spark.read.json(rawAugmentedJsonRdd)

This takes the original raw json rather than reconstructing it, doesn't require reading the data twice and joining, and doesn't require you to know the schema in advance.

Note that my answer uses pyspark in Python, but it should be easy to change to Scala; a rough Scala port is sketched below.

Also note that this approach assumes simple single-line json input and does not validate the json before manipulating the string directly, which was acceptable for my use case.
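
For completeness, a rough Scala port of the mapper above (a sketch; "raw_json" is an assumed field name, and the same single-line, valid-JSON assumptions apply):

val rawJsonFieldName = "raw_json" // assumed name for the added raw field

def addRawToJson(line: String): String = {
  val trimmed = line.trim
  // Escape backslashes and quotes so the raw line survives as a JSON string value.
  val rawJson = trimmed.replace("\\", "\\\\").replace("\"", "\\\"")
  // Drop the closing '}' and append the raw line as an extra field.
  s"""${trimmed.dropRight(1)}, "$rawJsonFieldName":"$rawJson"}"""
}

val rawAugmentedJsonRdd = sc.textFile("add file path here").map(addRawToJson)
val df = spark.read.json(rawAugmentedJsonRdd)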

Answer 2 (score: 0)

You can simply use the to_json built-in function together with the withColumn function:

import org.apache.spark.sql.functions._

val logs = sc.textFile("/Users/vgk/data/tiny.json")
val df = spark.read.json(logs)
df.withColumn("value", to_json(struct(df.columns.map(col): _*))).show(false)

Better yet, instead of using sparkContext's textFile to get an rdd, just use the sparkSession to read the json file directly:

val df = spark.read.json("/Users/vgk/data/tiny.json")

import org.apache.spark.sql.functions._
df.withColumn("value", to_json(struct(df.columns.map(col): _*))).show(false)

You should get:

+--------------+----------+---------+--------------+------------------------------------------------------------------------+
|colors        |eliminated|origin   |team          |value                                                                   |
+--------------+----------+---------+--------------+------------------------------------------------------------------------+
|gold,red,black|null      |Las Vegas|Golden Knights|{"colors":"gold,red,black","origin":"Las Vegas","team":"Golden Knights"}|
|null          |true      |San Jose |Sharks        |{"eliminated":"true","origin":"San Jose","team":"Sharks"}               |
|red,green,gold|null      |Minnesota|Wild          |{"colors":"red,green,gold","origin":"Minnesota","team":"Wild"}          |
+--------------+----------+---------+--------------+------------------------------------------------------------------------+