PySpark: splitting lists inside a list of tuples

Asked: 2017-08-15 13:55:59

Tags: python apache-spark pyspark

I have the following:

[('HOMICIDE', [('2017', 1)]), 
 ('DECEPTIVE PRACTICE', [('2017', 14), ('2016', 14), ('2015', 10), ('2013', 4), ('2014', 3)]), 
 ('ROBBERY', [('2017', 1)])]

How can I convert this to

[('HOMICIDE', ('2017', 1)), 
 ('DECEPTIVE PRACTICE', ('2015', 10)), 
 ('DECEPTIVE PRACTICE', ('2014', 3)), 
 ('DECEPTIVE PRACTICE', ('2017', 14)), 
 ('DECEPTIVE PRACTICE', ('2016', 14))]

When I try to use map, it throws "AttributeError: 'list' object has no attribute 'map'":

rdd = sc.parallelize([('HOMICIDE', [('2017', 1)]), 
                      ('DECEPTIVE PRACTICE', [('2017', 14), ('2016', 14), ('2015', 10), ('2013', 4), ('2014', 3)])])
y = rdd.map(lambda x: (x[0], tuple(x[1])))
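
(Note: the AttributeError itself comes from calling map on the plain Python list rather than on an RDD; once the data is parallelized as above, the call runs, but tuple(x[1]) only wraps each inner list in an outer tuple instead of flattening it. A quick check of what the attempt actually yields, assuming a live SparkContext sc:)

y.collect()
# [('HOMICIDE', (('2017', 1),)), 
#  ('DECEPTIVE PRACTICE', (('2017', 14), ('2016', 14), ('2015', 10), ('2013', 4), ('2014', 3)))]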

2 Answers:

Answer 0 (score: 2)

How about a list comprehension instead? (Here rdd has to be the plain Python list, since an RDD can't be looped over directly.)

y = [(x[0], i) for x in rdd for i in x[1]]

which returns

[('HOMICIDE', ('2017', 1)), 
 ('DECEPTIVE PRACTICE', ('2017', 14)), 
 ('DECEPTIVE PRACTICE', ('2016', 14)), 
 ('DECEPTIVE PRACTICE', ('2015', 10)), 
 ('DECEPTIVE PRACTICE', ('2013', 4)), 
 ('DECEPTIVE PRACTICE', ('2014', 3))]
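
If you still need the result as a distributed dataset afterwards, a one-line sketch should do (assuming an active SparkContext named sc; y_rdd is just an illustrative name):

y_rdd = sc.parallelize(y)  # back to an RDD of (key, (year, count)) pairs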

Answer 1 (score: 2)

map is a method on the RDD, not on the Python list, so you need to parallelize the list first and then use flatMap to flatten the inner lists:

rdd = sc.parallelize([('HOMICIDE', [('2017', 1)]), 
                      ('DECEPTIVE PRACTICE', [('2017', 14), ('2016', 14), ('2015', 10), ('2013', 4), ('2014', 3)]), 
                      ('ROBBERY', [('2017', 1)])])

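# flatMap pairs the key with every tuple in its value list, flattening one level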
rdd.flatMap(lambda x: [(x[0], y) for y in x[1]]).collect()

# [('HOMICIDE', ('2017', 1)), 
#  ('DECEPTIVE PRACTICE', ('2017', 14)), 
#  ('DECEPTIVE PRACTICE', ('2016', 14)), 
#  ('DECEPTIVE PRACTICE', ('2015', 10)), 
#  ('DECEPTIVE PRACTICE', ('2013', 4)), 
#  ('DECEPTIVE PRACTICE', ('2014', 3)), 
#  ('ROBBERY', ('2017', 1))]
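
As a side note, PySpark also offers flatMapValues, which flattens each value while keeping its key attached; a minimal equivalent sketch on the same rdd:

rdd.flatMapValues(lambda pairs: pairs).collect()
# produces the same (key, (year, count)) pairs as the flatMap above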