Truncating strings with PySpark

Date: 2017-12-16 15:41:16

Tags: python apache-spark pyspark apache-spark-sql

I'm currently working with PySpark on Databricks, and I'm looking for a way to truncate a string, much like the Excel RIGHT function. For example, I'd like to change an ID column value in a DataFrame from 8841673_3 to 8841673.

Does anyone know how I can do this?

2 answers:

Answer 0 (score: 2)

Use a regular expression with regexp_extract:

from pyspark.sql.functions import regexp_extract

df = spark.createDataFrame([("8841673_3", )], ("id", ))

df.select(regexp_extract("id", r"^(\d+)_.*", 1)).show()
# +--------------------------------+
# |regexp_extract(id, ^(\d+)_.*, 1)|
# +--------------------------------+
# |                         8841673|
# +--------------------------------+

Or regexp_replace:

from pyspark.sql.functions import regexp_replace

df.select(regexp_replace("id", "_.*$", "")).show()
# +--------------------------+
# |regexp_replace(id, _.*$, )|
# +--------------------------+
# |                   8841673|
# +--------------------------+

Or simply split:

from pyspark.sql.functions import split

df.select(split("id", "_")[0]).show()
# +---------------+
# |split(id, _)[0]|
# +---------------+
# |        8841673|
# +---------------+

Answer 1 (score: 0)

You can use the pyspark.sql.Column.substr method:

import pyspark.sql.functions as F

def left(x, n):
    # substr is 1-based: take the first n characters
    return x.substr(1, n)

def right(x, n):
    # substr takes (startPos, length), not (start, end); both arguments
    # must be the same type, so wrap the int n in F.lit when the start
    # position is a Column
    x_len = F.length(x)
    return x.substr(x_len - n + 1, F.lit(n))