from_json PySpark SQL function: default values for missing keys?

Asked: 2019-04-11 09:26:06

Tags: python apache-spark pyspark pyspark-sql

I use the from_json PySpark SQL function in the usual way, for example:

>>> import pyspark.sql.types as t
>>> from pyspark.sql.functions import from_json
>>> df = sc.parallelize(['{"a":1}', '{"a":1, "b":2}', '{"a":1, "b":2, "c":3}']).toDF(t.StringType())
>>> df.show(3, False)
+---------------------+
|value                |
+---------------------+
|{"a":1}              |
|{"a":1, "b":2}       |
|{"a":1, "b":2, "c":3}|
+---------------------+

>>> schema = t.StructType([t.StructField("a", t.IntegerType()), t.StructField("b", t.IntegerType()), t.StructField("c", t.IntegerType())])
>>> df.withColumn("json", from_json("value", schema)).show(3, False)
+---------------------+---------+
|value                |json     |
+---------------------+---------+
|{"a":1}              |[1,,]    |
|{"a":1, "b":2}       |[1, 2,]  |
|{"a":1, "b":2, "c":3}|[1, 2, 3]|
+---------------------+---------+

Note that the keys that are missing from the JSON but declared in the schema come out with a parsed value of null (or some kind of empty value?).

How can I avoid this? I mean, is there a way to pass default values to from_json? Or do I have to add those defaults when post-processing the DataFrame?

Thanks!

1 Answer:

Answer 0 (score: 1)

You can try:

from pyspark.sql.functions import get_json_object, when
from pyspark.sql.types import StringType

df = spark.createDataFrame(['{"a":1}', '{"a":1, "b":2}', '{"a":1, "b":2, "c":3}'], StringType())

df.show(3, False)

# Extract each key with get_json_object and substitute a default of 0
# whenever the key is missing from the JSON string.
df = df.withColumn("a", get_json_object("value", '$.a')) \
       .withColumn("b", when(get_json_object("value", '$.b').isNotNull(), get_json_object("value", '$.b')).otherwise(0)) \
       .withColumn("c", when(get_json_object("value", '$.c').isNotNull(), get_json_object("value", '$.c')).otherwise(0))

df.show(3, False)


+---------------------+
|value                |
+---------------------+
|{"a":1}              |
|{"a":1, "b":2}       |
|{"a":1, "b":2, "c":3}|
+---------------------+

+---------------------+---+---+---+
|value                |a  |b  |c  |
+---------------------+---+---+---+
|{"a":1}              |1  |0  |0  |
|{"a":1, "b":2}       |1  |2  |0  |
|{"a":1, "b":2, "c":3}|1  |2  |3  |
+---------------------+---+---+---+
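One caveat with this approach: get_json_object returns strings, so the a, b and c columns above are string columns rather than integers. If you want to keep from_json and the integer types from your schema, you can instead fill in the defaults after parsing. A minimal sketch, assuming the df and schema defined in the question (the parsed/result names and the choice of coalesce are just illustrative, not part of the original answer):

from pyspark.sql.functions import from_json, coalesce, lit

# Parse with the full schema, then replace nulls in the struct fields
# with a default of 0; the columns keep the IntegerType from the schema.
parsed = df.withColumn("json", from_json("value", schema))
result = parsed.select(
    "value",
    coalesce("json.a", lit(0)).alias("a"),
    coalesce("json.b", lit(0)).alias("b"),
    coalesce("json.c", lit(0)).alias("c"),
)
result.show(3, False)

An equivalent shortcut is parsed.select("value", "json.*").fillna(0), which expands the struct and fills every null numeric column in one call.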