Regex pattern from column values

Time: 2018-03-28 15:20:03

标签: regex pyspark pattern-matching callable-object

Hello, I have a dataframe with 2 columns:

+----------------------------------------+----------+
|                  Text                  | Key_word |
+----------------------------------------+----------+
| First random text tree cheese cat      | tree     |
| Second random text apple pie three     | text     |
| Third random text burger food brain    | brain    |
| Fourth random text nothing thing chips | random   |
+----------------------------------------+----------+

I want to generate a third column containing the word that appears directly before the Key_word in the Text:

+----------------------------------------+----------+-------------------+
|                  Text                  | Key_word | word_bef_key_word |
+----------------------------------------+----------+-------------------+
| First random text tree cheese cat      | tree     | text              |
| Second random text apple pie three     | text     | random            |
| Third random text burger food brain    | brain    | food              |
| Fourth random text nothing thing chips | random   | Fourth            |
+----------------------------------------+----------+-------------------+

I tried this, but it didn't work:

df2=df1.withColumn('word_bef_key_word',regexp_extract(df1.Text,('\\w+)'df1.key_word,1))

Here is the code to create the example dataframe:
df = sqlCtx.createDataFrame(
    [
        ('First random text tree cheese cat' , 'tree'),
        ('Second random text apple pie three', 'text'),
        ('Third random text burger food brain' , 'brain'),
        ('Fourth random text nothing thing chips', 'random')
    ],
    ('Text', 'Key_word') 
)
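
For anyone reproducing this, note that sqlCtx isn't defined in the snippet. A minimal setup sketch, assuming Spark 2+, where a SparkSession stands in for sqlCtx:

from pyspark.sql import SparkSession

# SparkSession exposes createDataFrame directly, so it can stand in for
# the sqlCtx used above (assumption: Spark 2.0 or later).
sqlCtx = SparkSession.builder.getOrCreate()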

1 Answer:

Answer 0 (score: 4)

Update

You can also do this without a udf by using pyspark.sql.functions.expr to pass column values as a parameter to pyspark.sql.functions.regexp_extract. (This is necessary because the pattern argument of the Python regexp_extract function must be a plain string, not a Column, so the pattern can't be built from another column directly.)

from pyspark.sql.functions import expr

# concat builds a per-row pattern such as '\w+(?= tree)': a word followed
# by a space and that row's Key_word; group 0 returns the whole match.
df = df.withColumn(
    'word_bef_key_word', 
    expr(r"regexp_extract(Text, concat('\\w+(?= ', Key_word, ')'), 0)")
)
df.show(truncate=False)
#+--------------------------------------+--------+-----------------+
#|Text                                  |Key_word|word_bef_key_word|
#+--------------------------------------+--------+-----------------+
#|First random text tree cheese cat     |tree    |text             |
#|Second random text apple pie three    |text    |random           |
#|Third random text burger food brain   |brain   |food             |
#|Fourth random text nothing thing chips|random  |Fourth           |
#+--------------------------------------+--------+-----------------+
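
One caveat not covered in the original answer: Spark's regexp_extract returns an empty string, not null, when the pattern finds no match. A quick sketch with a hypothetical non-matching row, reusing sqlCtx and expr from above:

df_miss = sqlCtx.createDataFrame(
    [('no keyword in this sentence', 'zebra')],  # hypothetical row: 'zebra' never appears in Text
    ('Text', 'Key_word')
)
df_miss.withColumn(
    'word_bef_key_word',
    expr(r"regexp_extract(Text, concat('\\w+(?= ', Key_word, ')'), 0)")
).show()
# word_bef_key_word comes back as '' (the empty string) for this row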

Original Answer

One way to do this is to use a udf to run the regex:

import re
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType  # needed for the udf return type

def get_previous_word(text, key_word):
    # Match any word followed by a space and the key_word; the lookahead
    # keeps the key_word itself out of the match.
    matches = re.findall(r'\w+(?= {kw})'.format(kw=key_word), text)
    return matches[0] if matches else None

get_previous_word_udf = udf(get_previous_word, StringType())

df = df.withColumn('word_bef_key_word', get_previous_word_udf('Text', 'Key_word'))
df.show(truncate=False)
#+--------------------------------------+--------+-----------------+
#|Text                                  |Key_word|word_bef_key_word|
#+--------------------------------------+--------+-----------------+
#|First random text tree cheese cat     |tree    |text             |
#|Second random text apple pie three    |text    |random           |
#|Third random text burger food brain   |brain   |food             |
#|Fourth random text nothing thing chips|random  |Fourth           |
#+--------------------------------------+--------+-----------------+

The regex pattern '\w+(?= {kw})'.format(kw=key_word) matches a word that is followed by a space and the key_word; the lookahead asserts this without consuming it. If there are multiple matches, the first one is returned. If there are no matches, the function returns None.
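
The lookahead behavior is easy to verify outside Spark. A quick plain-Python sketch, which also adds re.escape as a safeguard (my addition, not part of the original answer) for key words containing regex metacharacters:

import re

text = 'First random text tree cheese cat'

# The lookahead (?= tree) requires ' tree' to follow the matched word
# without being part of the match itself.
print(re.findall(r'\w+(?= {kw})'.format(kw='tree'), text))  # ['text']

# Hypothetical key word with regex metacharacters: escaping it first keeps
# the pattern valid (unescaped, 'c++' would raise a regex error).
safe_kw = re.escape('c++')
print(re.findall(r'\w+(?= {kw})'.format(kw=safe_kw), 'learning c++ today'))  # ['learning']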