Add a new column to a Spark dataframe by concatenating values derived from an existing column

Date: 2020-05-15 15:05:19

Tags: apache-spark pyspark apache-spark-sql spark-streaming

I want to add a new column to my dataframe based on the conditions below.

My dataframe looks like this:

my_string
2020 test
2020 prod
2020 dev

My conditions:

value1 = the string after the space in my_string

value2 = the first four digits of my_string

If value1 contains the string 'test', then new_col = value2 + "01"

If value1 contains the string 'prod', then new_col = value2 + "kk"

If value1 contains the string 'dev', then new_col = value2 + "ff"
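The conditions above amount to a split plus a suffix lookup. A minimal plain-Python sketch of the intended logic (the function name is illustrative, not part of the question):

```python
def make_new_col(my_string):
    # value2 = the part before the space, value1 = the word after it
    value2, value1 = my_string.split(" ", 1)
    suffixes = {"test": "01", "prod": "kk", "dev": "ff"}
    return value2 + suffixes[value1]

print(make_new_col("2020 test"))  # 202001
print(make_new_col("2020 prod"))  # 2020kk
```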

I need a result like this:

my_string  | new_col
2020 test  | 202001
2020 prod  | 2020kk
2020 dev   | 2020ff

Can someone please help me?

1 answer:

Answer 0 (score: 1)

Use the row_number window function with monotonically_increasing_id(). Note that this simply numbers the rows in arrival order, so it reproduces the desired output only because the rows happen to come in test/prod/dev order; see the update below for a version driven by the conditions themselves.

from pyspark.sql import Window
from pyspark.sql.functions import col, concat, lpad, monotonically_increasing_id, row_number, split

# Global ordering window (no partition) based on ingestion order
w = Window.orderBy(monotonically_increasing_id())

df.withColumn("new_col",
              concat(split(col("my_string"), " ")[0],
                     lpad(row_number().over(w), 2, "0"))).show()

#+---------+-------+
#|my_string|new_col|
#+---------+-------+
#|2020 test| 202001|
#|2020 prod| 202002|
#| 2020 dev| 202003|
#+---------+-------+

UPDATE:

Use a when + otherwise expression.

from pyspark.sql.functions import col, concat, lit, lower, split, when

# otherwise(lit(None)) yields a real null instead of the string "null"
df.withColumn("dyn_col",
              when(lower(split(col("my_string"), " ")[1]) == "prod", "kk")
              .when(lower(split(col("my_string"), " ")[1]) == "dev", "ff")
              .when(lower(split(col("my_string"), " ")[1]) == "test", "01")
              .otherwise(lit(None))) \
    .withColumn("new_col", concat(split(col("my_string"), " ")[0], col("dyn_col"))) \
    .drop("dyn_col") \
    .show()
#+---------+-------+
#|my_string|new_col|
#+---------+-------+
#|2020 test| 202001|
#|2020 prod| 2020kk|
#| 2020 dev| 2020ff|
#+---------+-------+

In Scala:

import org.apache.spark.sql.functions.{col, concat, lit, lower, split, when}

// otherwise(lit(null)) yields a real null instead of the string "null"
df.withColumn("dyn_col", when(lower(split(col("my_string"), " ")(1)) === "prod", "kk").
  when(lower(split(col("my_string"), " ")(1)) === "dev", "ff").
  when(lower(split(col("my_string"), " ")(1)) === "test", "01").
  otherwise(lit(null))).
  withColumn("new_col", concat(split(col("my_string"), " ")(0), col("dyn_col"))).
  drop("dyn_col").
  show()

//+---------+-------+
//|my_string|new_col|
//+---------+-------+
//|2020 test| 202001|
//|2020 prod| 2020kk|
//| 2020 dev| 2020ff|
//+---------+-------+