PySpark cosine similarity on a DataFrame

Time: 2019-10-02 14:55:02

Tags: python apache-spark pyspark user-defined-functions

I have a PySpark DataFrame df1 that looks like this:

Customer1  Customer2  v_cust1   v_cust2
   1           2         0.9      0.1
   1           3         0.3      0.4
   1           4         0.2      0.9
   2           1         0.8      0.8

I want to compute the cosine similarity of the two customer vectors and obtain something like:

Customer1  Customer2  v_cust1   v_cust2  cosine_sim
   1           2         0.9      0.1       0.1
   1           3         0.3      0.4       0.9
   1           4         0.2      0.9       0.15
   2           1         0.8      0.8       1

I have a Python function that accepts numbers / arrays of numbers like this:

import numpy as np

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
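
For example, a quick sanity check with NumPy arrays (the vectors here are purely illustrative):

import numpy as np

u = np.array([0.9, 0.3, 0.2])   # illustrative vector for one customer
v = np.array([0.1, 0.4, 0.9])   # illustrative vector for another customer

print(cos_sim(u, v))  # some value between -1 and 1
print(cos_sim(u, u))  # identical vectors give exactly 1.0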

How can I create the cosine_sim column in the dataframe using a udf? Can I pass several columns, instead of just one, to the udf cos_sim function?

1 answer:

Answer 0 (score: 1)

It would be more efficient to use a pandas_udf.

It performs better than Spark UDFs for vectorized operations: Introducing Pandas UDF for PySpark

import numpy as np
import pyspark.sql.functions as F
from pyspark.sql.functions import PandasUDFType, pandas_udf

# Names of columns 
a, b = "v_cust1", "v_cust2"
cosine_sim_col = "cosine_sim"

# Add a placeholder column up front: the GROUPED_MAP pandas_udf below declares
# df.schema as its output schema, so the cosine_sim column must already exist.
df = df.withColumn(cosine_sim_col, F.lit(1.0).cast("double"))

@pandas_udf(df.schema, PandasUDFType.GROUPED_MAP)
def cos_sim(df):
    df[cosine_sim_col] = float(np.dot(df[a], df[b]) / (np.linalg.norm(df[a]) * np.linalg.norm(df[b])))
    return df


# Assuming that you want to groupby Customer1 and Customer2 for arrays
df2 = df.groupby(["Customer1", "Customer2"]).apply(cos_sim)

# But if you want to send entire columns, make a column with the same
# value in all rows and group by it, e.g.:
df3 = df.withColumn("group", F.lit("group_a")).groupby("group").apply(cos_sim)