PySpark - UnpicklingError: NEWOBJ class argument has NULL tp_new

Asked: 2017-06-06 17:12:03

Tags: python apache-spark pyspark

When I run the statement below, I get an unpickling error:

rdd = sc.parallelize([('HOMICIDE', {'2017': 1}),
                      ('DECEPTIVE PRACTICE', {'2015': 2, '2017': 2, '2016': 8}),
                      ('ROBBERY', {'2016': 2})])

rdd.flatMapValues(dict.items).collect()

The error is shown below. Is there a problem with using flatMapValues on dictionary values?

  File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
UnpicklingError: NEWOBJ class argument has NULL tp_new
) [duplicate 3]
17/06/06 17:01:14 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 

1 Answer:

Answer 0 (score: 2)

rdd = sc.parallelize([('HOMICIDE', {'2017': 1}), 
                      ('DECEPTIVE PRACTICE', {'2015': 2, '2017': 2, '2016': 8}), 
                      ('ROBBERY', {'2016': 2})])

rdd.flatMapValues(lambda data: data.items()).collect()

[('HOMICIDE', ('2017', 1)),
 ('DECEPTIVE PRACTICE', ('2015', 2)),
 ('DECEPTIVE PRACTICE', ('2017', 2)),
 ('DECEPTIVE PRACTICE', ('2016', 8)),
 ('ROBBERY', ('2016', 2))]

dict.items is a method descriptor, not an ordinary function. You have to supply a callable that tells flatMapValues how to unpack each value. I did that by passing a lambda function to flatMapValues.
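For context, here is a minimal, Spark-free sketch of what flatMapValues does to the pairs, plus a named top-level function that works as an alternative to the lambda. The helper names `flat_map_values` and `to_items` are hypothetical, chosen for illustration; they are not part of the PySpark API.

```python
data = [('HOMICIDE', {'2017': 1}),
        ('DECEPTIVE PRACTICE', {'2015': 2, '2017': 2, '2016': 8}),
        ('ROBBERY', {'2016': 2})]

def flat_map_values(pairs, f):
    # Local equivalent of RDD.flatMapValues: apply f to each value and
    # emit one (key, element) pair per element of f's result.
    return [(key, element) for key, value in pairs for element in f(value)]

# A plain module-level function is another alternative to the lambda;
# unlike dict.items (a method descriptor on a built-in type), an
# ordinary Python function can be serialized and shipped to workers.
def to_items(d):
    return list(d.items())

result = flat_map_values(data, to_items)
for pair in result:
    print(pair)
```

Either `to_items` or `lambda data: data.items()` would produce the same flattened (key, (year, count)) pairs shown in the answer's output.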