Filtering a Spark DataFrame using keywords from another DataFrame

Asked: 2017-09-28 22:14:25

Tags: python apache-spark pyspark pyspark-sql

I have a large dataset of news articles loaded into a PySpark DataFrame. I'm interested in filtering that DataFrame down to the set of articles whose body text contains certain words of interest. Currently the list of keywords is small, but I'd like to store them in a DataFrame anyway, since the list may expand in the future. Consider the following small example:

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()

article_data = [{'source': 'a', 'body': 'Seattle is in Washington.'},
                {'source': 'b', 'body': 'Los Angeles is in California'},
                {'source': 'a', 'body': 'Banana is a fruit'}]
article_df = spark.createDataFrame(article_data)

keyword_data = [{'city': 'Seattle', 'state': 'Washington'},
                {'city': 'Los Angeles', 'state': 'California'}]
keyword_df = spark.createDataFrame(keyword_data)

This gives us the following two DataFrames:

+--------------------+------+
|                body|source|
+--------------------+------+
|Seattle is in Was...|     a|
|Los Angeles is in...|     b|
|   Banana is a fruit|     a|
+--------------------+------+

+-----------+----------+
|       city|     state|
+-----------+----------+
|    Seattle|Washington|
|Los Angeles|California|
+-----------+----------+

As a first pass, I'd like to filter article_df so that it only contains articles whose body string contains any of the strings in keyword_df['city']. I'd also like to filter it down to articles whose body contains both a string from keyword_df['city'] and the corresponding (same-row) entry in keyword_df['state']. How can I accomplish this?

I have managed to do this with a manually defined list of keywords:

from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

def city_filter(x):
    # Case-insensitive check for any hard-coded city name in the body text.
    cities = ['Seattle', 'Los Angeles']
    x = x.lower()
    return any(s.lower() in x for s in cities)

filterUDF = udf(city_filter, BooleanType())

Then article_df.filter(filterUDF(article_df.body)).show() gives the desired result:

+--------------------+------+
|                body|source|
+--------------------+------+
|Seattle is in Was...|     a|
|Los Angeles is in...|     b|
+--------------------+------+

How can I implement this filter without having to manually define the list of keywords (or the tuples of keyword pairs)? Should I use a UDF for this?

1 Answer:

Answer 0 (score: 0)

You can implement it with a leftsemi join on a custom join expression, for example:

from pyspark.sql.functions import expr

# Keep only articles whose body contains some city from keyword_df.
body_contains_city = expr('body like concat("%", city, "%")')
article_df.join(keyword_df, body_contains_city, 'leftsemi').show()
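
This matches case-sensitively on the city alone. If you also want the case-insensitive behaviour of the UDF version, or the city-plus-state pairing asked about in the question, the same join condition extends naturally. Here is a minimal sketch using only built-in Spark SQL functions (lower, concat, like); the pair condition requires both values from the same keyword row to appear in the body:

from pyspark.sql.functions import expr

# Case-insensitive match on the city alone.
body_contains_city = expr('lower(body) like concat("%", lower(city), "%")')

# Match only articles containing both the city and its same-row state.
body_contains_pair = expr(
    'lower(body) like concat("%", lower(city), "%") '
    'AND lower(body) like concat("%", lower(state), "%")'
)

article_df.join(keyword_df, body_contains_pair, 'leftsemi').show()

Note that a leftsemi join returns each matching article at most once, even if it matches several keyword rows, which is usually what you want for a filter.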