PySpark split column

Date: 2017-10-19 17:44:00

Tags: pyspark

from pyspark.sql import Row, functions as F
row = Row("UK_1","UK_2","Date","Cat",'Combined')
agg = ''
agg = 'Cat'
tdf = (sc.parallelize([
        row(1,1,'12/10/2016',"A",'Water^World'),
        row(1,2,None,'A','Sea^Born'),
        row(2,1,'14/10/2016','B','Germ^Any'),
        row(3,3,'!~2016/2/276','B','Fin^Land'),
        row(None,1,'26/09/2016','A','South^Korea'),
        row(1,1,'12/10/2016',"A",'North^America'),
        row(1,2,None,'A','South^America'),
        row(2,1,'14/10/2016','B','New^Zealand'),
        row(None,None,'!~2016/2/276','B','South^Africa'),
        row(None,1,'26/09/2016','A','Saudi^Arabia')
        ]).toDF())
cols = F.split(tdf['Combined'], '^')
tdf = tdf.withColumn('column1', cols.getItem(0))
tdf = tdf.withColumn('column2', cols.getItem(1))
tdf.show(truncate=False)

The above is my sample code.

For some reason, it is not splitting the column on the ^ character.

Any suggestions?

2 answers:

Answer 0 (score: 3)

The pattern is a regular expression (see split), and ^ is an anchor that matches the start of the string in a regex. To match it literally, you need to escape it:

cols = F.split(tdf['Combined'], r'\^')
tdf = tdf.withColumn('column1', cols.getItem(0))
tdf = tdf.withColumn('column2', cols.getItem(1))
tdf.show(truncate = False)

+----+----+------------+---+-------------+-------+-------+
|UK_1|UK_2|Date        |Cat|Combined     |column1|column2|
+----+----+------------+---+-------------+-------+-------+
|1   |1   |12/10/2016  |A  |Water^World  |Water  |World  |
|1   |2   |null        |A  |Sea^Born     |Sea    |Born   |
|2   |1   |14/10/2016  |B  |Germ^Any     |Germ   |Any    |
|3   |3   |!~2016/2/276|B  |Fin^Land     |Fin    |Land   |
|null|1   |26/09/2016  |A  |South^Korea  |South  |Korea  |
|1   |1   |12/10/2016  |A  |North^America|North  |America|
|1   |2   |null        |A  |South^America|South  |America|
|2   |1   |14/10/2016  |B  |New^Zealand  |New    |Zealand|
|null|null|!~2016/2/276|B  |South^Africa |South  |Africa |
|null|1   |26/09/2016  |A  |Saudi^Arabia |Saudi  |Arabia |
+----+----+------------+---+-------------+-------+-------+
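As an alternative to hand-escaping, Python's standard-library `re.escape` can build the literal pattern for you; the escaped string is also valid in Java regex syntax, which is what Spark's `split` uses under the hood. A minimal sketch of the escaping itself, using Python's `re` module so it runs without a Spark session:

```python
import re

delim = '^'
pattern = re.escape(delim)                 # '\\^' -- a regex matching a literal caret
print(pattern)                             # \^

# The escaped pattern splits correctly; the raw '^' would not:
print(re.split(pattern, 'Water^World'))    # ['Water', 'World']
```

In the PySpark code above this would read `F.split(tdf['Combined'], re.escape('^'))`, which avoids remembering which characters are regex metacharacters.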

Answer 1 (score: 0)

Try using '\^'. The same applies when you use the '.' dot as the delimiter.
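The dot case mentioned above can be demonstrated with Python's `re` module (Spark uses Java regex, but both treat an unescaped `.` as "any character"); a small sketch:

```python
import re

# '.' in a regex means "any character": splitting on it consumes every character.
print(re.split('.', 'a.b.c'))          # ['', '', '', '', '', '']

# Escaped, it matches only a literal dot:
print(re.split(r'\.', 'a.b.c'))        # ['a', 'b', 'c']

# Likewise for the literal caret:
print(re.split(r'\^', 'Water^World'))  # ['Water', 'World']
```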