PySpark - split all DataFrame column strings into arrays

Asked: 2018-02-27 19:48:17

Tags: apache-spark pyspark

In PySpark, how can I split the string in every column into a list of strings?

a = [('a|q|e', 'd|r|y'), ('j|l|f', 'm|g|j')]
df = spark.createDataFrame(a, ['col1', 'col2'])

+-----+-----+
| col1| col2|
+-----+-----+
|a|q|e|d|r|y|
|j|l|f|m|g|j|
+-----+-----+

Expected output:

+---------+---------+
|     col1|     col2|
+---------+---------+
|[a, q, e]|[d, r, y]|
|[j, l, f]|[m, g, j]|
+---------+---------+

I can do this one column at a time with withColumn, but I haven't found an appealing solution for a dynamic number of columns.

from pyspark.sql.functions import col, split
# handles a single, hard-coded column at a time
outDF = df.withColumn("col1", split(col("col1"), "\\|"))

1 Answer:

Answer 0 (score: 3)

One option is to first build a list of column expressions and then unpack it into the select method with varargs (*) syntax:

from pyspark.sql.functions import col, split 
cols = ['col1', 'col2']                                               # columns to split
col_exprs = [split(col(x), "\\|").alias(x) for x in cols]
df.select(*col_exprs).show()
+---------+---------+
|     col1|     col2|
+---------+---------+
|[a, q, e]|[d, r, y]|
|[j, l, f]|[m, g, j]|
+---------+---------+
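
If the columns to split aren't known in advance, the same pattern extends to every string-typed column via df.dtypes. A minimal sketch, assuming every string column uses | as its delimiter:

from pyspark.sql.functions import col, split

# split string-typed columns, pass any other columns through unchanged
col_exprs = [
    split(col(name), "\\|").alias(name) if dtype == 'string' else col(name)
    for name, dtype in df.dtypes
]
df.select(*col_exprs).show()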

Another option is to use functools.reduce with withColumn to create the new columns dynamically:

from functools import reduce
reduce(
    lambda df, colname: df.withColumn(colname, split(col(colname), "\\|").alias(colname)), 
    cols, 
    df
).show()
+---------+---------+
|     col1|     col2|
+---------+---------+
|[a, q, e]|[d, r, y]|
|[j, l, f]|[m, g, j]|
+---------+---------+

Running explain() on the reduce version shows that Catalyst collapses the chained withColumn calls into a single projection, matching what the select approach would produce:

reduce(lambda df, colname: df.withColumn(colname, split(col(colname), "\\|").alias(colname)), cols, df).explain()
# == Physical Plan ==
# *Project [split(col1#0, \|) AS col1#76, split(col2#1, \|) AS col2#81]
# +- Scan ExistingRDD[col1#0,col2#1]
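
As a quick sanity check for either approach (not part of the original answer), inspecting the result's schema should show each split column typed as an array of strings:

out = df.select(*col_exprs)  # col_exprs as built in the first option above
out.printSchema()            # each split column should now be array<string>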