I have two dataframes: df1
+---+-----------------+
|id1| items1|
+---+-----------------+
| 0| [B, C, D, E]|
| 1| [E, A, C]|
| 2| [F, A, E, B]|
| 3| [E, G, A]|
| 4| [A, C, E, B, D]|
+---+-----------------+
and df2:
+---+-----------------+
|id2| items2|
+---+-----------------+
|001| [A, C]|
|002| [D]|
|003| [E, A, B]|
|004| [B, D, C]|
|005| [F, B]|
|006| [G, E]|
+---+-----------------+
I want to create an indicator vector (in a new column result_array in df1) based on the values in items2. The vector's length should equal the number of rows in df2 (in this example it should have 6 elements). An element should be 1.0 if the row in items1 contains all the elements of the corresponding row in items2, and 0.0 otherwise. The result should look like this:
+---+-----------------+-------------------------+
|id1| items1| result_array|
+---+-----------------+-------------------------+
| 0| [B, C, D, E]|[0.0,1.0,0.0,1.0,0.0,0.0]|
| 1| [E, A, C]|[1.0,0.0,0.0,0.0,0.0,0.0]|
| 2| [F, A, E, B]|[0.0,0.0,1.0,0.0,1.0,0.0]|
| 3| [E, G, A]|[0.0,0.0,0.0,0.0,0.0,1.0]|
| 4| [A, C, E, B, D]|[1.0,1.0,1.0,1.0,0.0,0.0]|
+---+-----------------+-------------------------+
For example, in row 0 the second value is 1.0 because [D] is a subset of [B, C, D, E], and the fourth value is 1.0 because [B, D, C] is a subset of [B, C, D, E]. None of the other item groups in df2 are subsets of [B, C, D, E], so their indicator values are 0.0.
I tried using collect() to build a list of all the item groups in items2 and then applying a udf, but my data is too large (more than 10 million rows).
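For reference, a minimal sketch of that collect()-based attempt might look like the following (hypothetical names; it pulls every items2 group onto the driver, which is the part that stops being practical at the data sizes described):

import pyspark.sql.functions as F
from pyspark.sql.types import ArrayType, FloatType

# collect every items2 group to the driver (does not scale when df2 is large)
items2_groups = [row['items2'] for row in df2.select('items2').collect()]

# 1.0 where an items2 group is fully contained in the row's items1
indicator_udf = F.udf(
    lambda items1: [1.0 if set(g).issubset(set(items1)) else 0.0
                    for g in items2_groups],
    ArrayType(FloatType()))

df1.withColumn('result_array', indicator_udf('items1')).show(truncate=False)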
Answer 0 (score: 1)
You can do it like this:
import pyspark.sql.functions as F
from pyspark.sql.types import *
from pyspark.sql import SparkSession

# 'sql' is assumed to be an active SparkSession (the original answer did not show how it was created)
sql = SparkSession.builder.getOrCreate()

df1 = sql.createDataFrame([
    (0, ['B', 'C', 'D', 'E']),
    (1, ['E', 'A', 'C']),
    (2, ['F', 'A', 'E', 'B']),
    (3, ['E', 'G', 'A']),
    (4, ['A', 'C', 'E', 'B', 'D'])],
    ['id1', 'items1'])

# ids written as plain integers: leading-zero literals such as 001 are a SyntaxError in Python 3
df2 = sql.createDataFrame([
    (1, ['A', 'C']),
    (2, ['D']),
    (3, ['E', 'A', 'B']),
    (4, ['B', 'D', 'C']),
    (5, ['F', 'B']),
    (6, ['G', 'E'])],
    ['id2', 'items2'])
which gives you the dataframes,
+---+---------------+
|id1| items1|
+---+---------------+
| 0| [B, C, D, E]|
| 1| [E, A, C]|
| 2| [F, A, E, B]|
| 3| [E, G, A]|
| 4|[A, C, E, B, D]|
+---+---------------+
+---+---------+
|id2| items2|
+---+---------+
| 1| [A, C]|
| 2| [D]|
| 3|[E, A, B]|
| 4|[B, D, C]|
| 5| [F, B]|
| 6| [G, E]|
+---+---------+
Now crossJoin the two dataframes to get the Cartesian product of df1 and df2. Then groupby on 'id1' and 'items1' and apply a udf to build 'result_array'.
# 1.0 where the items2 group is contained in items1 (<= so an equal set also counts)
get_array_udf = F.udf(lambda x, y: [1.0 if set(z) <= set(x) else 0.0 for z in y],
                      ArrayType(FloatType()))

# cross join, collect all items2 groups per df1 row, then apply the udf
df = df1.crossJoin(df2)\
    .groupby(['id1', 'items1']).agg(F.collect_list('items2').alias('items2'))\
    .withColumn('result_array', get_array_udf('items1', 'items2')).drop('items2')
df.show()
df.show()
This gives you the output,
+---+---------------+------------------------------+
|id1|items1 |result_array |
+---+---------------+------------------------------+
|1 |[E, A, C] |[1.0, 0.0, 0.0, 0.0, 0.0, 0.0]|
|0 |[B, C, D, E] |[0.0, 1.0, 0.0, 1.0, 0.0, 0.0]|
|4 |[A, C, E, B, D]|[1.0, 1.0, 1.0, 1.0, 0.0, 0.0]|
|3 |[E, G, A] |[0.0, 0.0, 0.0, 0.0, 0.0, 1.0]|
|2 |[F, A, E, B] |[0.0, 0.0, 1.0, 0.0, 1.0, 0.0]|
+---+---------------+------------------------------+
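If the Python udf is too slow at 10M+ rows, a possible variant (a sketch only, assuming Spark 2.4+ where array_intersect is available) expresses the subset test with built-in functions and sorts by id2 so the vector is guaranteed to follow df2's row order, which plain collect_list does not promise:

# items2 is a subset of items1 iff their intersection has the same size as items2
flags = df1.crossJoin(df2).withColumn(
    'flag',
    (F.size(F.array_intersect('items1', 'items2')) == F.size('items2')).cast('float'))

# collect (id2, flag) pairs, sort them by id2, then keep only the flags
result = flags.groupby('id1', 'items1').agg(
    F.sort_array(F.collect_list(F.struct('id2', 'flag'))).alias('pairs')
).withColumn('result_array', F.col('pairs.flag')).drop('pairs')

result.show(truncate=False)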