I have two ArrayType(StringType()) columns in a Spark DataFrame, and I want to concatenate the two columns element-wise:
Input:
+-------------+-------------+
|col1 |col2 |
+-------------+-------------+
|['a','b'] |['c','d'] |
|['a','b','c']|['e','f','g']|
+-------------+-------------+
Output:
+-------------+-------------+----------------+
|col1 |col2 |col3 |
+-------------+-------------+----------------+
|['a','b'] |['c','d'] |['ac', 'bd'] |
|['a','b','c']|['e','f','g']|['ae','bf','cg']|
+-------------+-------------+----------------+
Thanks.
Answer 0 (Score: 4):
For Spark 2.4+, you can use the transform function, as follows:

from pyspark.sql.functions import expr

col3_expr = "transform(col1, (x, i) -> concat(x, col2[i]))"
df.withColumn("col3", expr(col3_expr)).show()
The transform function takes the first array column, col1, as its argument, iterates over its elements, and applies the lambda function (x, i) -> concat(x, col2[i]), where x is the current element and i its index, which is used to fetch the corresponding element from the array col2. For the first row, for example, x = 'a' at i = 0 yields concat('a', col2[0]) = 'ac'.
Gives:
Or, even more simply, with the higher-order zip_with function (see the sketch after the result table below).
+------+------+--------+
|  col1|  col2|    col3|
+------+------+--------+
|[a, b]|[c, d]|[ac, bd]|
+------+------+--------+
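A minimal sketch of that zip_with variant, assuming Spark 2.4+ (where zip_with is available as a SQL higher-order function) and the same df as above:

from pyspark.sql.functions import expr

# zip_with merges col1 and col2 element-wise using the given lambda
df.withColumn("col3", expr("zip_with(col1, col2, (x, y) -> concat(x, y))")).show()

This yields the same col3 without indexing into col2 by hand.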
Answer 1 (Score: 0):
It won't really scale, but you can take the 0th and 1st entries of each array and say col3 is a[0] + b[0] followed by a[1] + b[1]. That is, pull all 4 entries out as separate values, then combine them into the output.
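A rough sketch of that idea for the fixed two-element case; the explicit indexing and reassembly below is illustrative, not code from the original answer:

from pyspark.sql.functions import array, col, concat

# concatenate matching positions explicitly, then reassemble into one array;
# this only covers two positions, which is why the approach does not scale
df.withColumn(
    "col3",
    array(
        concat(col("col1")[0], col("col2")[0]),  # a[0] + b[0]
        concat(col("col1")[1], col("col2")[1]),  # a[1] + b[1]
    ),
).show()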
Answer 2 (Score: 0):
This is a generic answer; just look at the result. It assumes two arrays of equal size, i.e. n elements in each.
from pyspark.sql.functions import array, col, concat

# Assumes an active SparkSession named `spark`, as in the pyspark shell.
# Generate numeric test data: 3 rows, each with two arrays of the same (constant) length, as in your example.
df = spark.createDataFrame([(list([x, x+1, 4, x+100]), 4, list([x+100, x+200, 999, x+500])) for x in range(3)],
                           ['array1', 'value1', 'array2'])
num_array_elements = len(df.select("array1").first()[0])

# Concatenate matching positions, producing one column per index, then collect them into one array.
df2 = df.select([concat(col("array1")[i], col("array2")[i]) for i in range(num_array_elements)])
df2.withColumn("res", array(df2.schema.names)).show(truncate=False)
This returns the per-index concatenation columns plus a combined res array.
Answer 3 (Score: 0):
Here is an alternative answer, applicable to the updated, non-original question; it demonstrates the use of array and array_except. The accepted answer is more elegant.
from pyspark.sql.functions import array, array_except, col, concat, lit, size

# Arbitrary max number of elements to apply array over; no need to broadcast such a small amount of data afaik.
max_entries = 5

# Generate numeric test data: arrays of varying length across rows, but constant length within each row.
dfA = spark.createDataFrame([(list([x, x+1, 4, x+100]), 4, list([x+100, x+200, 999, x+500])) for x in range(3)],
                            ['array1', 'value1', 'array2']).withColumn("s", size(col("array1")))
dfB = spark.createDataFrame([(list([x, x+1]), 4, list([x+100, x+200])) for x in range(5)],
                            ['array1', 'value1', 'array2']).withColumn("s", size(col("array1")))
df = dfA.union(dfB)

# Concatenate the array elements, variable in size per row, up to the max number of entries;
# out-of-range indices yield null, so shorter arrays produce null entries here.
df2 = df.select([concat(col("array1")[i], col("array2")[i]) for i in range(max_entries)])
df3 = df2.withColumn("res", array(df2.schema.names))

# Get the results, but strip the null entries out of the array.
df3.select(array_except(df3.res, array(lit(None)))).show(truncate=False)
I could not get the per-row value of the s column to pass into range, hence the hardcoded max_entries; see the sketch after the output below for a way around this.
This returns:
+------------------------------+
|array_except(res, array(NULL))|
+------------------------------+
|[0100, 1200, 4999, 100500]    |
|[1101, 2201, 4999, 101501]    |
|[2102, 3202, 4999, 102502]    |
|[0100, 1200]                  |
|[1101, 2201]                  |
|[2102, 3202]                  |
|[3103, 4203]                  |
|[4104, 5204]                  |
+------------------------------+
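As an aside, on Spark 2.4+ the hardcoded max_entries and the null stripping can both be avoided by letting transform walk each row's own array, along the lines of the accepted answer. A sketch, assuming the same df as above; the explicit casts are mine, since these arrays hold numbers rather than strings:

from pyspark.sql.functions import expr

# transform iterates each row's own array, so no fixed upper bound is needed
df.select(
    expr("transform(array1, (x, i) -> concat(cast(x as string), cast(array2[i] as string)))").alias("res")
).show(truncate=False)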