Convert a list into dataframe columns in pyspark

Date: 2019-10-21 02:15:52

Tags: pyspark

I have a dataframe in which one string-typed column holds a list of items that I want to explode and attach to the parent dataframe. How can I do this?

Here is the code to create a sample dataframe:

from pyspark.sql import Row
from collections import OrderedDict

def convert_to_row(d: dict) -> Row:
    # Sort the keys so every Row has its fields in the same order
    return Row(**OrderedDict(sorted(d.items())))

data = [
    {"arg1": "first", "arg2": "John", "arg3": '[{"name" : "click", "datetime" : "1570103345039", "event" : "entry" }, {"name" : "drag", "datetime" : "1580133345039", "event" : "exit" }]'},
    {"arg1": "second", "arg2": "Joe", "arg3": '[{"name" : "click", "datetime" : "1670105345039", "event" : "entry" }, {"name" : "drop", "datetime" : "1750134345039", "event" : "exit" }]'},
    {"arg1": "third", "arg2": "Jane", "arg3": '[{"name" : "click", "datetime" : "1580105245039", "event" : "entry" }, {"name" : "drop", "datetime" : "1650134345039", "event" : "exit" }]'},
]

df = sc.parallelize(data).map(convert_to_row).toDF()

Running this code creates a dataframe that looks like this:

+------+----+--------------------+
|  arg1|arg2|                arg3|
+------+----+--------------------+
| first|John|[{"name" : "click...|
|second| Joe|[{"name" : "click...|
| third|Jane|[{"name" : "click...|
+------+----+--------------------+
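For reference, arg3 is just a plain string at this point, which df.printSchema() confirms:

df.printSchema()
# root
#  |-- arg1: string (nullable = true)
#  |-- arg2: string (nullable = true)
#  |-- arg3: string (nullable = true)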

The arg3 column contains a list that I want to explode into detail columns, so that the dataframe looks like this:

arg1 | arg2 | arg3 | name | datetime | event

How can I achieve this?

1 Answer:

Answer 0 (score: 3):

You need to specify an array schema in the from_json function:

from pyspark.sql.functions import explode, from_json

# DDL-format (simpleString) schema for the JSON array stored in arg3
schema = 'array<struct<name:string,datetime:string,event:string>>'

# Parse arg3 into an array of structs, explode it into one row per
# element, then flatten the struct fields into top-level columns
df.withColumn('data', explode(from_json('arg3', schema))) \
  .select(*df.columns, 'data.*') \
  .show()
+------+----+--------------------+-----+-------------+-----+
|  arg1|arg2|                arg3| name|     datetime|event|
+------+----+--------------------+-----+-------------+-----+
| first|John|[{"name" : "click...|click|1570103345039|entry|
| first|John|[{"name" : "click...| drag|1580133345039| exit|
|second| Joe|[{"name" : "click...|click|1670105345039|entry|
|second| Joe|[{"name" : "click...| drop|1750134345039| exit|
| third|Jane|[{"name" : "click...|click|1580105245039|entry|
| third|Jane|[{"name" : "click...| drop|1650134345039| exit|
+------+----+--------------------+-----+-------------+-----+
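As a side note, on Spark versions that support DDL schema strings in SQL (roughly 2.3+), the SQL inline generator gives a roughly equivalent one-liner. This is a sketch of the same transformation, not part of the original answer:

# inline() explodes an array of structs into one row per element,
# with the struct fields promoted to columns
df.selectExpr(
    "*",
    "inline(from_json(arg3, 'array<struct<name:string,datetime:string,event:string>>'))"
).show()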

Note: if your Spark version does not support the simpleString format for schemas, try the following instead:

from pyspark.sql.types import ArrayType, StringType, StructType, StructField

schema = ArrayType(
    StructType([
        StructField('name', StringType()),
        StructField('datetime', StringType()),
        StructField('event', StringType()),
    ])
)
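
This programmatic schema object then drops into the same pipeline in place of the DDL string:

from pyspark.sql.functions import explode, from_json

# from_json accepts the ArrayType object just like the DDL string
df.withColumn('data', explode(from_json('arg3', schema))) \
  .select(*df.columns, 'data.*') \
  .show()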