PySpark: Using Objects in an RDD

Date: 2015-11-10 20:30:20

Tags: python apache-spark pyspark

I am currently learning Python and want to use it with Spark. I have this very simple (and useless) script:

import sys
from pyspark import SparkContext

class MyClass:
    def __init__(self, value):
        self.v = str(value)

    def addValue(self, value):
        self.v += str(value)

    def getValue(self):
        return self.v

if __name__ == "__main__":
    if len(sys.argv) != 1:
        print("Usage CC")
        exit(-1)

    data = [1, 2, 3, 4, 5, 2, 5, 3, 2, 3, 7, 3, 4, 1, 4]
    sc = SparkContext(appName="WordCount")
    d = sc.parallelize(data)
    inClass = d.map(lambda input: (input, MyClass(input)))
    reduzed = inClass.reduceByKey(lambda a, b: a.addValue(b.getValue))
    print(reduzed.collect())

When executed with

spark-submit CustomClass.py

the following error is thrown (output shortened):

Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
    process()
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 133, in dump_stream
    for obj in iterator:
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1728, in add_shuffle_key
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 415, in dumps
    return pickle.dumps(obj, protocol)
PicklingError: Can't pickle __main__.MyClass: attribute lookup __main__.MyClass failed
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)...

To me, the statement

PicklingError: Can't pickle __main__.MyClass: attribute lookup __main__.MyClass failed

seems to be the important one. It means that the class instances cannot be serialized, right? Do you know how to solve this problem?

Thanks and regards

1 answer:

Answer 0 (score: 14)

There are quite a few issues here:

  • If you put MyClass in a separate file, it can be pickled. This is a common problem whenever pickle is used in Python. Moving MyClass out and importing it with from myclass import MyClass solves it easily. Normally dill can also fix such issues (as in import dill as pickle), but that did not work for me here. (See the short pickle sketch after this list.)
  • Once that is fixed, your reduce still does not work, because calling addValue returns None (there is no return statement) rather than an instance of MyClass. You need to change addValue to return self.
  • Finally, the lambda needs to call getValue, so it should be a.addValue(b.getValue()).
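
As background for the first point, here is a minimal, stand-alone sketch (not part of the original answer): pickle stores an instance's state plus a reference to its defining module and class name, so whichever process unpickles (or re-pickles) the object must be able to import that class. PySpark's worker processes cannot look up classes defined in the driver script's __main__, which is why moving MyClass into its own module helps.

import pickle

class MyClass:
    def __init__(self, value):
        self.v = str(value)

obj = MyClass(1)
blob = pickle.dumps(obj)       # stores the state ({'v': '1'}) plus a reference
                               # to __main__.MyClass, not the class code itself
restored = pickle.loads(blob)  # works here because __main__.MyClass is importable
print(restored.v)              # prints "1"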

Combined: myclass.py

class MyClass:
    def __init__(self, value):
        self.v = str(value)

    def addValue(self, value):
        self.v += str(value)
        return self

    def getValue(self):
        return self.v

main.py

import sys
from pyspark import SparkContext
from myclass import MyClass

if __name__ == "__main__":
    if len(sys.argv) != 1:
        print("Usage CC")
        exit(-1)

    data = [1, 2, 3, 4, 5, 2, 5, 3, 2, 3, 7, 3, 4, 1, 4]
    sc = SparkContext(appName="WordCount")
    d = sc.parallelize(data)
    inClass = d.map(lambda input: (input, MyClass(input)))
    reduzed = inClass.reduceByKey(lambda a, b: a.addValue(b.getValue()))
    print(reduzed.collect())
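
One practical note (an assumption about the setup, not stated in the original answer): when submitting to a cluster, the separate module also has to be shipped to the executors, for example with spark-submit's --py-files flag:

spark-submit --py-files myclass.py main.py

Also note that collect() returns (key, MyClass) pairs, so the printed output shows default object reprs unless MyClass defines __repr__.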