Finding whether a string contains a substring from a different DataFrame

Date: 2019-12-05 14:16:11

Tags: python apache-spark pyspark pyspark-sql

I have 2 DataFrames:

df_1 has a column id_normalized, which contains only letters and digits (the normalized form of the id), alongside the original id_no_normalized. Example:

id_normalized | id_no_normalized
--------------|-----------------
ABC           | A_B.C
ERFD          | E.R_FD
12ZED         | 12_Z.ED

df_2 has a column name, which contains only letters and digits, with normalized ids embedded inside the string.

Example:

name
---------------------------
googleisa12ZEDgoodnavigator
internetABCexplorer

I want to check whether each name (dataset_2) contains any id_normalized (dataset_1). If a match is found, take the corresponding value of id_no_normalized and store it in a new column of dataset_2.

Expected result:

name                        |  result
----------------------------|----------
googleisa12ZEDgoodnavigator |  12_Z.ED
internetABCexplorer         |  A_B.C

I tried to do it with the following code:

df_result = df_2.withColumn("id_no_normalized", df_2.name.contains(df_1.id_normalized))
return df_result.select("name", "id_normalized")

It doesn't work, because it cannot find id_normalized in df_2.

My second solution works only when I limit the output to about 300 rows; when I return all the data, it runs for a very long time and never finishes:

from pyspark.sql import functions as F

df_1 = df_1.select("id_no_normalized").drop_duplicates()
df_1 = df_1.withColumn(
    "id_normalized",
    F.regexp_replace(F.col("id_no_normalized"), "[^a-zA-Z0-9]+", ""))
df_2 = df_2.select("name")
extract = F.expr('position(id_normalized IN name)>0')
result = df_1.join(df_2, extract)
return result

How can I correct my code to make it work? Thanks.

1 Answer:

Answer 0 (score: 1)

We can solve this using a cross join and applying a UDF on the resulting DataFrame, but again we need to make sure it performs acceptably on a large dataset.

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

data1 = [
 {"id_normalized": "ABC", "id_no_normalized": "A_B.C"},
 {"id_normalized": "ERFD", "id_no_normalized": "E.R_FD"},
 {"id_normalized": "12ZED", "id_no_normalized": "12_Z.ED"}
]

data2 = [
 {"name": "googleisa12ZEDgoodnavigator"},
 {"name": "internetABCexplorer"}
]

# `spark` is the active SparkSession (as in the pyspark shell)
df1 = spark.createDataFrame(data1, ["id_no_normalized", "id_normalized"])
df2 = spark.createDataFrame(data2, ["name"])

# Pair every id with every name, then look for the id inside the name
df3 = df1.crossJoin(df2)

# str.find returns the index of the substring, or -1 when it is absent
search_for_udf = udf(lambda name, id_normalized: name.find(id_normalized), returnType=IntegerType())
df4 = df3.withColumn("contain", search_for_udf(df3["name"], df3["id_normalized"]))

# Keep only the rows where the id was actually found in the name
df4.filter(df4["contain"] > -1).show()


+----------------+-------------+--------------------+-------+
|id_no_normalized|id_normalized|                name|contain|
+----------------+-------------+--------------------+-------+
|           A_B.C|          ABC| internetABCexplorer|      8|
|         12_Z.ED|        12ZED|googleisa12ZEDgoo...|      9|
+----------------+-------------+--------------------+-------+

I believe the cross join could be made more efficient with some Spark techniques.
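For example, one option (an untested sketch, assuming the df1 and df2 built above) is to broadcast the small id table and use the built-in contains as the join condition, which avoids both the full shuffle of a cross join and the Python UDF round-trip:

from pyspark.sql import functions as F

# Sketch: broadcasting the small lookup table lets Spark plan a
# broadcast nested-loop join instead of shuffling a full cartesian
# product, and the built-in `contains` keeps the comparison in the JVM.
result = df2.join(
    F.broadcast(df1),
    F.col("name").contains(F.col("id_normalized")),
    "inner",
).select("name", F.col("id_no_normalized").alias("result"))

result.show(truncate=False)

Since df1 is small enough to fit in executor memory, each executor can then run the substring test locally against its partition of df2.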