In PySpark, I have two RDDs with the structure (key, list of lists):
input_rdd.take(2)
[(u'100',
[[u'36003165800', u'70309879', u'1']]),
(u'200',
[[u'5196352600', u'194837393', u'99']]) ]
output_rdd.take(2)
[(u'100',
[[u'875000', u'5959', u'1']]),
(u'300', [[u'16107000', u'12428', u'1']])]
Now I want a resulting RDD (shown below) that groups the two RDDs by key and gives, for each key, the output as a tuple (key, (input lists, output lists)). If a key is missing from either the input or the output RDD, the corresponding list should stay empty:
[(u'100',
  ([[[u'36003165800', u'70309879', u'1']]],
   [[[u'875000', u'5959', u'1']]])),
 (u'200',
  ([[[u'5196352600', u'194837393', u'99']]],
   [])),
 (u'300', ([], [[[u'16107000', u'12428', u'1']]]))
]
To obtain this resulting RDD, I am using the code below:
resultant = sc.parallelize([(x, tuple(map(list, y))) for x, y in sorted(list(input_rdd.groupWith(output_rdd).collect()))])
Is there a way to drop the .collect() and instead use .map() with the groupWith function to get the same resulting RDD in PySpark?
Answer 0: (score: 0)
A full outer join gives:
input_rdd.fullOuterJoin(output_rdd).collect()
# [(u'200', ([[u'5196352600', u'194837393', u'99']], None)),
# (u'300', (None, [[u'16107000', u'12428', u'1']])),
# (u'100', ([[u'36003165800', u'70309879', u'1']], [[u'875000', u'5959', u'1']]))]
Replace None with []:
input_rdd.fullOuterJoin(output_rdd).map(lambda x: (x[0], tuple(i if i is not None else [] for i in x[1]))).collect()
# [(u'200', ([[u'5196352600', u'194837393', u'99']], [])),
# (u'300', ([], [[u'16107000', u'12428', u'1']])),
# (u'100', ([[u'36003165800', u'70309879', u'1']], [[u'875000', u'5959', u'1']]))]
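The replacement step above is plain Python and can be checked without a SparkContext. Below is a minimal sketch: `fill_missing` is a hypothetical helper (not from the original answer) that mirrors the lambda passed to `.map()`, applied here to a list that simulates the output of `fullOuterJoin`.

```python
def fill_missing(pair):
    # pair is (key, (left_value, right_value)) as produced by fullOuterJoin;
    # a side that had no matching key arrives as None and becomes [].
    key, values = pair
    return (key, tuple(v if v is not None else [] for v in values))

# Simulated fullOuterJoin output (no Spark needed for this check):
joined = [
    (u'200', ([[u'5196352600', u'194837393', u'99']], None)),
    (u'300', (None, [[u'16107000', u'12428', u'1']])),
    (u'100', ([[u'36003165800', u'70309879', u'1']], [[u'875000', u'5959', u'1']])),
]

result = [fill_missing(p) for p in joined]
print(result)
```

In Spark this is exactly `input_rdd.fullOuterJoin(output_rdd).map(fill_missing)`, which keeps everything distributed and avoids the `.collect()`.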