Adding single quotes to DataFrame column values

Asked: 2019-11-06 07:25:50

Tags: dataframe apache-spark pyspark databricks

A DataFrame has a column QUALIFY whose values look like this:

QUALIFY
=================
ColA|ColB|ColC
ColA
ColZ|ColP

The values in this column are separated by "|". I want the values in this column to look like 'ColA','ColB','ColC' ...

With the code below I can replace | with ','. How do I also add a single quote at the beginning and at the end of each value?

newDf = df_qualify.withColumn('QUALIFY2', regexp_replace('QUALIFY', "\\|", "\\','"))
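For reference, that expression only handles the inner separators. A quick plain-Python stand-in (not Spark, just to illustrate the gap on the first sample value) shows that the outer quotes are still missing:

s = "ColA|ColB|ColC"
print(s.replace("|", "','"))   # ColA','ColB','ColC  <- inner quotes added, outer quotes still missing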

2 Answers:

Answer 0 (score: 1)

Split the column on |, then turn the resulting array back into a string. Using my sample dataframe:

import pyspark.sql.functions as F
import pyspark.sql.types as T

def str_list(x):
    # str(x) renders the list as "['ColA', 'ColB', 'ColC']"; stripping the brackets leaves the quoted values
    return str(x).replace("[", "").replace("]", "")

str_udf = F.udf(str_list, T.StringType())

# "|" is a regex metacharacter, so escape it (raw string) when splitting
df = df.withColumn("arr_split", F.split(F.col("QUALIFY"), r"\|"))
df = df.withColumn("QUALIFY2", str_udf(F.col("arr_split")))

Answer 1 (score: 1)

Your solution is almost there; you just need to add a single quote at the beginning and at the end. You can do that with pyspark.sql.functions.concat:

from pyspark.sql.functions import col, concat, lit, regexp_replace

df.withColumn(
    "QUALIFY2",
    concat(lit("'"), regexp_replace(col('QUALIFY'), r"\|", r"','"), lit("'"))
).show()
#+--------------+--------------------+
#|       QUALIFY|            QUALIFY2|
#+--------------+--------------------+
#|ColA|ColB|ColC|'ColA','ColB','ColC'|
#|          ColA|              'ColA'|
#|     ColZ|ColP|       'ColZ','ColP'|
#+--------------+--------------------+

Alternatively, you can avoid regular expressions altogether and achieve the same result with split and concat_ws:

from pyspark.sql.functions import split, concat_ws
df.withColumn(
    "QUALIFY2", 
    concat(lit("'"), concat_ws("','", split("QUALIFY", "\|")), lit("'"))
).show()
#+--------------+--------------------+
#|       QUALIFY|            QUALIFY2|
#+--------------+--------------------+
#|ColA|ColB|ColC|'ColA','ColB','ColC'|
#|          ColA|              'ColA'|
#|     ColZ|ColP|       'ColZ','ColP'|
#+--------------+--------------------+
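
Not from the original answers, but as one more sketch of the same idea: pyspark.sql.functions.format_string (printf-style formatting) can add the outer quotes in a single call instead of concat with two lit literals:

from pyspark.sql.functions import format_string, concat_ws, split

df.withColumn(
    "QUALIFY2",
    # "'%s'" wraps the joined string in a leading and trailing single quote
    format_string("'%s'", concat_ws("','", split("QUALIFY", r"\|")))
).show()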