How to use methods from a class with Apache Spark

Asked: 2015-07-07 21:02:54

Tags: python class methods apache-spark

I am learning Apache Spark. After working through the Spark tutorials, I understand how to pass a plain Python function to Spark to process an RDD. But I still do not understand how Spark handles methods defined inside a class. For example, my code is as follows:

import numpy as np
import copy
from pyspark import SparkConf, SparkContext

class A():
    def __init__(self, n):
        self.num = n

class B(A):
    ### Copy the item of class A to B.
    def __init__(self, A):
        self.num = copy.deepcopy(A.num)

    ### Print out the item of B
    def display(self, s):
        print s.num
        return s

def main():
    ### Locally run an application "test" using Spark.
    conf = SparkConf().setAppName("test").setMaster("local[2]")

    ### Setup the Spark configuration.
    sc = SparkContext(conf = conf)

    ### "data" is a list to store a list of instances of class A. 
    data = []
    for i in np.arange(5):
        x = A(i)
        data.append(x)

    ### "lines" separate "data" in Spark.  
    lines = sc.parallelize(data)

    ### Parallelly creates a list of instances of class B using
    ### Spark "map".
    temp = lines.map(B)

    ### Now I got the error when it runs the following code:
    ### NameError: global name 'display' is not defined.
    temp1 = temp.map(display)

if __name__ == "__main__":
    main()

In the code above I generate a list of instances of class B in parallel with temp = lines.map(B). After that, I call temp1 = temp.map(display) because I want to print every item of the list of class B instances in parallel. But I get the error: NameError: global name 'display' is not defined. I would like to know how to fix this error while still doing the computation in parallel with Spark. I would really appreciate any help.

1 Answer:

Answer 0 (score: 4)

Structure

.
├── ab.py
└── main.py

main.py

import numpy as np
from pyspark import SparkConf, SparkContext
import os
from ab import A, B

def main():
    ### Locally run an application "test" using Spark.
    conf = SparkConf().setAppName("test").setMaster("local[2]")

    ### Setup the Spark configuration.
    sc = SparkContext(
            conf = conf, pyFiles=[
               os.path.join(os.path.abspath(os.path.dirname(__file__)), 'ab.py')]
    ) 

    data = []
    for i in np.arange(5):
        x = A(i)
        data.append(x)

    lines = sc.parallelize(data)
    temp = lines.map(B)

    temp.foreach(lambda x: x.display()) 

if __name__ == "__main__":
    main()

ab.py

import copy

class A():
    def __init__(self, n):
        self.num = n

class B(A):
    ### Copy the item of class A to B.
    def __init__(self, A):
        self.num = copy.deepcopy(A.num)

    ### Print out the item of B
    def display(self):
        print self.num
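
Why the separate module is needed: PySpark pickles the functions you pass to map and foreach and sends them to the worker processes, and instances of classes defined only in the driver's __main__ script often cannot be resolved there. Moving A and B into ab.py and shipping that file to the workers (done above with the pyFiles argument) makes them importable on every executor. As a minimal alternative sketch, the same file can also be registered after the context has been created, using addPyFile:

sc = SparkContext(conf = conf)
### Ship ab.py to the workers so that "from ab import A, B" also resolves there.
sc.addPyFile(os.path.join(os.path.abspath(os.path.dirname(__file__)), 'ab.py'))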

Comments:

  • Once again: printing on the workers is a bad idea. It ignores the Spark architecture and will most likely become a bottleneck in your program.
  • If you need diagnostic output, consider logging instead, or collect a sample and inspect it locally: for x in rdd.sample(False, 0.001).collect(): x.display() (see the sketch after this list).
  • For side effects, use foreach instead of map.
  • I modified the display method. It was not clear what the extra argument s was supposed to be in the original display(self, s), so the method now takes no argument and prints self.num directly.
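
As sketched here, the usual pattern is to bring results back to the driver instead of printing on the workers (a minimal sketch, assuming the temp RDD from main.py above):

### Extract the plain values and collect them on the driver.
nums = temp.map(lambda b: b.num).collect()
print nums  # [0, 1, 2, 3, 4]

### Or, as suggested above, inspect a small random sample locally.
for x in temp.sample(False, 0.5).collect():
    x.display()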