How to read multiple nested JSON objects from one file into a dataframe in PySpark on Azure Databricks?

Asked: 2019-10-22 05:49:20

Tags: pyspark azure-databricks

I have a .log file in ADLS that contains multiple nested JSON objects, as shown below

{"EventType":3735091736,"Timestamp":"2019-03-19","Data":{"Id":"event-c2","Level":2,"MessageTemplate":"Test1","Properties":{"CorrId":"d69b7489","ActionId":"d0e2c3fd"}},"Id":"event-c20b9c7eac0808d6321106d901000000"}
{"EventType":3735091737,"Timestamp":"2019-03-18","Data":{"Id":"event-d2","Level":2,"MessageTemplate":"Test1","Properties":{"CorrId":"f69b7489","ActionId":"d0f2c3fd"}},"Id":"event-d20b9c7eac0808d6321106d901000000"}
{"EventType":3735091738,"Timestamp":"2019-03-17","Data":{"Id":"event-e2","Level":1,"MessageTemplate":"Test1","Properties":{"CorrId":"g69b7489","ActionId":"d0d2c3fd"}},"Id":"event-e20b9c7eac0808d6321106d901000000"}

I need to read these multiple nested JSON objects in PySpark and convert them into a dataframe like this

EventType    Timestamp     Data.Id    .....  Data.Properties.CorrId    Data.Properties.ActionId
3735091736   2019-03-19    event-c2   .....  d69b7489                  d0e2c3fd
3735091737   2019-03-18    event-d2   .....  f69b7489                  d0f2c3fd
3735091738   2019-03-17    event-e2   .....  g69b7489                  d0d2c3fd

For this I am using ADLS and PySpark in Azure Databricks.

Does anyone know a general way to solve this? Thanks!

2 Answers:

Answer 0 (score: 0)

  1. You can first read the file into an RDD; each line will be treated as a string.
  2. Use json.loads() to convert each JSON string into a native Python data type.
  3. You can then convert the RDD into a dataframe; toDF() will infer the schema directly.
  4. Using the answer from Flatten Spark Dataframe column of map/dictionary into multiple columns, you can explode the Data column into multiple columns, given that your Id column is unique. Note that explode returns a key and a value column for each entry in the map type.
  5. You can repeat step 4 to expand the Properties column (see the sketch after the solution output below).

Solution:

import json
from pyspark.sql import functions as F  # needed for F.explode / F.first below

# Read each line as a string, parse it into a dict, and let toDF() infer the schema
rdd = sc.textFile("demo_files/Test20191023.log")
df = rdd.map(lambda x: json.loads(x)).toDF()
df.show()
# +--------------------+----------+--------------------+----------+
# |                Data| EventType|                  Id| Timestamp|
# +--------------------+----------+--------------------+----------+
# |[MessageTemplate ...|3735091736|event-c20b9c7eac0...|2019-03-19|
# |[MessageTemplate ...|3735091737|event-d20b9c7eac0...|2019-03-18|
# |[MessageTemplate ...|3735091738|event-e20b9c7eac0...|2019-03-17|
# +--------------------+----------+--------------------+----------+

data_exploded = df.select('Id', 'EventType', "Timestamp", F.explode('Data'))\
    .groupBy('Id', 'EventType', "Timestamp").pivot('key').agg(F.first('value'))
# Note: the pivot leaves a duplicate Id column, which might cause ambiguity problems
data_exploded.show()

# +--------------------+----------+----------+--------+-----+---------------+--------------------+
# |                  Id| EventType| Timestamp|      Id|Level|MessageTemplate|          Properties|
# +--------------------+----------+----------+--------+-----+---------------+--------------------+
# |event-c20b9c7eac0...|3735091736|2019-03-19|event-c2|    2|          Test1|{CorrId=d69b7489,...|
# |event-d20b9c7eac0...|3735091737|2019-03-18|event-d2|    2|          Test1|{CorrId=f69b7489,...|
# |event-e20b9c7eac0...|3735091738|2019-03-17|event-e2|    1|          Test1|{CorrId=g69b7489,...|
# +--------------------+----------+----------+--------+-----+---------------+--------------------+
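For step 5, here is a minimal hedged sketch (assuming the same file path as above). Since toDF()'s schema inference may turn Data into a map of strings, the nested Properties dict can arrive already stringified after the pivot; as an alternative, the nested dicts can be flattened in plain Python before building the dataframe:

# Hedged sketch: flatten the nested dicts in Python before toDF(), so the
# Properties fields become ordinary top-level columns (column names such as
# DataId are illustrative choices, not from the original answer).
import json
from pyspark.sql import Row

def flatten(line):
    obj = json.loads(line)
    data = obj.get("Data", {})
    props = data.get("Properties", {})
    return Row(Id=obj.get("Id"),
               EventType=obj.get("EventType"),
               Timestamp=obj.get("Timestamp"),
               DataId=data.get("Id"),
               Level=data.get("Level"),
               MessageTemplate=data.get("MessageTemplate"),
               CorrId=props.get("CorrId"),
               ActionId=props.get("ActionId"))

flat_df = sc.textFile("demo_files/Test20191023.log").map(flatten).toDF()
flat_df.show()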

Answer 1 (score: 0)

I was able to read the data with the following code.

from pyspark.sql.functions import *

# spark.read.json reads one JSON object per line and infers the nested schema
DF = spark.read.json("demo_files/Test20191023.log")

DF.select(col('Id'),col('EventType'),col('Timestamp'),col('Data.Id'),col('Data.Level'),col('Data.MessageTemplate'),
          col('Data.Properties.CorrId'),col('Data.Properties.ActionId'))\
  .show()

Result:

+--------------------+----------+----------+--------+-----+---------------+--------+--------+
|                  Id| EventType| Timestamp|      Id|Level|MessageTemplate|  CorrId|ActionId|
+--------------------+----------+----------+--------+-----+---------------+--------+--------+
|event-c20b9c7eac0...|3735091736|2019-03-19|event-c2|    2|          Test1|d69b7489|d0e2c3fd|
|event-d20b9c7eac0...|3735091737|2019-03-18|event-d2|    2|          Test1|f69b7489|d0f2c3fd|
|event-e20b9c7eac0...|3735091738|2019-03-17|event-e2|    1|          Test1|g69b7489|d0d2c3fd|
+--------------------+----------+----------+--------+-----+---------------+--------+--------+
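
If the file is large, one possible refinement (a sketch, with the field types assumed from the sample rows above) is to supply an explicit schema so that spark.read.json does not need an extra pass over the data to infer it:

# Explicit schema matching the sample rows above (field types are assumptions)
from pyspark.sql.types import StructType, StructField, StringType, LongType

schema = StructType([
    StructField("EventType", LongType()),
    StructField("Timestamp", StringType()),
    StructField("Id", StringType()),
    StructField("Data", StructType([
        StructField("Id", StringType()),
        StructField("Level", LongType()),
        StructField("MessageTemplate", StringType()),
        StructField("Properties", StructType([
            StructField("CorrId", StringType()),
            StructField("ActionId", StringType()),
        ])),
    ])),
])

DF = spark.read.schema(schema).json("demo_files/Test20191023.log")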