How do I generate a new PySpark DataFrame by comparing entries from two other DataFrames?

Asked: 2017-09-29 21:11:23

Tags: python pyspark pyspark-sql

I want to search a PySpark DataFrame that contains a string field and determine which keyword strings appear in each string. Say I have the following DataFrame of keywords:

+-----------+----------+
|       city|     state|
+-----------+----------+
|    Seattle|Washington|
|Los Angeles|California|
+-----------+----------+

and I want to search within this DataFrame:

+----------------------------------------+------+
|body                                    |source|
+----------------------------------------+------+
|Seattle is in Washington.               |a     |
|Los Angeles is in California            |b     |
|Banana is a fruit                       |c     |
|Seattle is not in New Hampshire         |d     |
|California is home to Los Angeles       |e     |
|Seattle, California is not a real place.|f     |
+----------------------------------------+------+

I want to create a new DataFrame that identifies which keywords of each type appear in each source. So the final result would be:

+-----------+------+-----+
|name       |source|type |
+-----------+------+-----+
|Seattle    |a     |city |
|Washington |a     |state|
|Los Angeles|b     |city |
|California |b     |state|
|Seattle    |d     |city |
|Los Angeles|e     |city |
|California |e     |state|
|Seattle    |f     |city |
|California |f     |state|
+-----------+------+-----+

How can I get this result? I can use a join to isolate the body strings that contain these keywords, but I'm not sure how to keep track of which specific keyword matched and use that information to create the new columns.

1 Answer:

Answer 0 (score: 2)

First, let's create the keywords DataFrame and reshape it into (word, type) pairs:

import pyspark.sql.functions as psf
# Build the keywords DataFrame, then unpivot it into (word, type) rows:
# each input row yields a city struct and a state struct, which are exploded.
keywords_df = sc.parallelize([["Seattle", "Washington"], ["Los Angeles", "California"]])\
    .toDF(["city", "state"])
keywords_df = keywords_df\
    .withColumn("struct", psf.explode(psf.array(
        psf.struct(psf.col("city").alias("word"), psf.lit("city").alias("type")), 
        psf.struct(psf.col("state").alias("word"), psf.lit("state").alias("type"))
    )))\
    .select("struct.*")
keywords_df.show()

    +-----------+-----+
    |       word| type|
    +-----------+-----+
    |    Seattle| city|
    | Washington|state|
    |Los Angeles| city|
    | California|state|
    +-----------+-----+
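
As a side note, the same reshape can also be written without explode, by unioning two projections of the original two-column DataFrame. A minimal sketch, assuming Spark 2.0+ where DataFrame.union is available (raw_df and alt_keywords_df are hypothetical names):

# Alternative reshape: union two (word, type) projections instead of
# exploding an array of structs. Starts again from the raw city/state frame.
raw_df = sc.parallelize([["Seattle", "Washington"], ["Los Angeles", "California"]])\
    .toDF(["city", "state"])
alt_keywords_df = raw_df.select(psf.col("city").alias("word"), psf.lit("city").alias("type"))\
    .union(raw_df.select(psf.col("state").alias("word"), psf.lit("state").alias("type")))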

If your keywords contained no spaces, you could split each sentence into words and explode so that each row holds exactly one word, and then join against the keywords DataFrame. Because of Los Angeles, that is not the case here (a sketch of that approach follows the text_df definition below).

text_df = sc.parallelize([["Seattle is in Washington.", "a"],["Los Angeles is in California", "b"],
                          ["Banana is a fruit", "c"],["Seattle is not in New Hampshire", "d"],
                          ["California is home to Los Angeles", "e"],["Seattle, California is not a real place.", "f"]])\
    .toDF(["body", "source"])

Instead, we'll use a join with a string contains condition:

# Keep every (source, keyword) pair where the body contains the keyword.
res = text_df.join(keywords_df, text_df.body.contains(keywords_df.word)).drop("body")
res.show()

    +------+-----------+-----+
    |source|       word| type|
    +------+-----------+-----+
    |     a|    Seattle| city|
    |     a| Washington|state|
    |     b|Los Angeles| city|
    |     b| California|state|
    |     d|    Seattle| city|
    |     f|    Seattle| city|
    |     e|Los Angeles| city|
    |     e| California|state|
    |     f| California|state|
    +------+-----------+-----+
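
If you want the exact column names and row ordering shown in the question, one last select and sort does it (a small follow-up sketch; final is a hypothetical name):

# Rename word to name and order the rows to match the expected output.
final = res.select(psf.col("word").alias("name"), "source", "type")\
    .orderBy("source", "type")
final.show(truncate=False)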