PySpark: create a column from the ordered concatenation of columns

Posted: 2018-04-25 18:05:56

Tags: python apache-spark pyspark

I'm having trouble creating a new column in a PySpark DataFrame from the ordered concatenation of two existing columns, for example:

+------+------+--------+
| Col1 | Col2 | NewCol |
+------+------+--------+
| ORD  | DFW  | DFWORD |
| CUN  | MCI  | CUNMCI |
| LAX  | JFK  | JFKLAX |
+------+------+--------+

In other words, I want to take Col1 and Col2, sort them alphabetically, and concatenate them.

Any suggestions?
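The desired row-wise logic can be sketched in plain Python first (the sample pairs below are taken from the table above; this is just an illustration of the transformation, not a Spark solution):

```python
# Sort each pair of airport codes alphabetically, then join them.
pairs = [("ORD", "DFW"), ("CUN", "MCI"), ("LAX", "JFK")]
new_col = ["".join(sorted(pair)) for pair in pairs]
print(new_col)  # ['DFWORD', 'CUNMCI', 'JFKLAX']
```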

1 Answer:

Answer 0 (score: 4)

Combine `concat_ws`, `array`, and `sort_array`:

from pyspark.sql.functions import concat_ws, array, sort_array

df = spark.createDataFrame(
    [("ORD", "DFW"), ("CUN", "MCI"), ("LAX", "JFK")],
    ("Col1", "Col2"))

df.withColumn("NewCol", concat_ws("", sort_array(array("Col1", "Col2")))).show()
# +----+----+------+
# |Col1|Col2|NewCol|
# +----+----+------+
# | ORD| DFW|DFWORD|
# | CUN| MCI|CUNMCI|
# | LAX| JFK|JFKLAX|
# +----+----+------+