How to parse each JSON row into columns of a Spark 2 DataFrame?

Asked: 2018-02-06 09:25:58

Tags: json scala apache-spark apache-spark-sql

In my Spark (2.2) DataFrame, every row is a JSON string:

df.head()
//output
//[{"key":"111","event_name":"page-visited","timestamp":1517814315}]

df.show()
//output
//+--------------+
//|         value|
//+--------------+
//|{"key":"111...|
//|{"key":"222...|

I want to split each JSON row into separate columns, to get this result:

key   event_name     timestamp
111   page-visited   1517814315
...

I tried the following approach, but it does not give me the expected result:

import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types._
import spark.implicits._ // needed for the $"column" syntax; assumes a SparkSession named spark, as in spark-shell

val schema = StructType(Seq(
  StructField("key", StringType, true),
  StructField("event_name", StringType, true),
  StructField("timestamp", IntegerType, true)
))

val result = df.withColumn("value", from_json($"value", schema))

result.printSchema()
root
 |-- value: struct (nullable = true)
 |    |-- key: string (nullable = true)
 |    |-- event_name: string (nullable = true)
 |    |-- timestamp: integer (nullable = true)

whereas it should be:

result.printSchema()
root
 |-- key: string (nullable = true)
 |-- event_name: string (nullable = true)
 |-- timestamp: integer (nullable = true)

1 Answer:

Answer 0 (score: 2)

You can add select($"value.*") at the end to select the elements of the struct column into separate top-level columns:

val result = df.withColumn("value", from_json($"value", schema)).select($"value.*")
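
For completeness, here is a minimal self-contained sketch of the whole pipeline; the sample rows and the SparkSession setup are illustrative assumptions, not part of the original post:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types._

val spark = SparkSession.builder().master("local[*]").appName("json-rows-to-columns").getOrCreate()
import spark.implicits._

// Hypothetical input shaped like the rows in the question
val df = Seq(
  """{"key":"111","event_name":"page-visited","timestamp":1517814315}""",
  """{"key":"222","event_name":"page-visited","timestamp":1517814316}"""
).toDF("value")

val schema = StructType(Seq(
  StructField("key", StringType, true),
  StructField("event_name", StringType, true),
  StructField("timestamp", IntegerType, true)
))

// from_json parses each JSON string into a struct; $"value.*" flattens the struct fields
val result = df.withColumn("value", from_json($"value", schema)).select($"value.*")

result.printSchema()
// root
//  |-- key: string (nullable = true)
//  |-- event_name: string (nullable = true)
//  |-- timestamp: integer (nullable = true)

result.show()
// +---+------------+----------+
// |key|  event_name| timestamp|
// +---+------------+----------+
// |111|page-visited|1517814315|
// |222|page-visited|1517814316|
// +---+------------+----------+

As an alternative, Spark 2.2 added an overload of spark.read.json that accepts a Dataset[String], so spark.read.json(df.as[String]) should give the same flattened columns with an inferred schema (note that timestamp would then be inferred as long rather than integer).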