I am learning Apache Spark. After carefully reading the Spark tutorials, I understand how to pass a Python function to Apache Spark to process an RDD dataset. But I still don't know how Apache Spark handles methods defined inside a class. For example, here is my code:
import numpy as np
import copy

from pyspark import SparkConf, SparkContext


class A():
    def __init__(self, n):
        self.num = n


class B(A):
    ### Copy the item of class A to B.
    def __init__(self, A):
        self.num = copy.deepcopy(A.num)

    ### Print out the item of B
    def display(self, s):
        print s.num
        return s


def main():
    ### Locally run an application "test" using Spark.
    conf = SparkConf().setAppName("test").setMaster("local[2]")
    ### Setup the Spark configuration.
    sc = SparkContext(conf=conf)

    ### "data" is a list storing instances of class A.
    data = []
    for i in np.arange(5):
        x = A(i)
        data.append(x)

    ### "lines" distributes "data" across Spark partitions.
    lines = sc.parallelize(data)

    ### Create a list of instances of class B in parallel using
    ### Spark "map".
    temp = lines.map(B)

    ### Now I get the error when it runs the following code:
    ### NameError: global name 'display' is not defined.
    temp1 = temp.map(display)


if __name__ == "__main__":
    main()
Actually, with the code above I generate a list of instances of class B in parallel using temp = lines.map(B). After that, I do temp1 = temp.map(display), because I want to print every item in the list of class B instances in parallel. But now I get the error: NameError: global name 'display' is not defined.
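For reference, without Spark the sequential version of what I am trying to do works, roughly like this:

data = [A(i) for i in np.arange(5)]
for x in data:
    b = B(x)
    b.display(b)   # with the display(self, s) signature above, prints b.num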
I would like to know how to fix this error while still computing in parallel with Apache Spark. I would really appreciate any help.
Answer 0 (score: 4)
Structure:
.
├── ab.py
└── main.py
main.py:

import numpy as np
import os

from pyspark import SparkConf, SparkContext

from ab import A, B


def main():
    ### Locally run an application "test" using Spark.
    conf = SparkConf().setAppName("test").setMaster("local[2]")
    ### Setup the Spark configuration and ship ab.py to the
    ### workers, so that instances of B can be unpickled there.
    sc = SparkContext(
        conf=conf, pyFiles=[
            os.path.join(os.path.abspath(os.path.dirname(__file__)), 'ab.py')]
    )

    data = []
    for i in np.arange(5):
        x = A(i)
        data.append(x)

    lines = sc.parallelize(data)
    temp = lines.map(B)

    temp.foreach(lambda x: x.display())


if __name__ == "__main__":
    main()
ab.py:

import copy


class A():
    def __init__(self, n):
        self.num = n


class B(A):
    ### Copy the item of class A to B.
    def __init__(self, A):
        self.num = copy.deepcopy(A.num)

    ### Print out the item of B
    def display(self):
        print self.num
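If you would rather not pass pyFiles when constructing the context, the same effect can be had with SparkContext.addPyFile after the context is created. A minimal sketch, assuming the same file layout as above:

import os

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("test").setMaster("local[2]")
sc = SparkContext(conf=conf)

### Ship ab.py to the workers after the context exists,
### so that instances of B can be deserialized there.
sc.addPyFile(os.path.join(os.path.abspath(os.path.dirname(__file__)), 'ab.py'))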
Comments:

To inspect only a small part of the RDD on the driver, you can sample it and collect the sample locally:

for x in rdd.sample(False, 0.001).collect(): x.display()

The answer uses foreach instead of map because display is called only for its printing side effect and does not return anything. As for the display(self, s) method in the question, I am not sure what s is supposed to be in this case.
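To make the foreach-versus-map distinction concrete: map is a lazy transformation that should return a value for each element, while foreach is an action executed purely for its side effects. A small sketch, reusing temp from the answer above:

### map is a transformation: it builds a new RDD from the return values.
nums = temp.map(lambda x: x.num).collect()   # [0, 1, 2, 3, 4]

### foreach is an action: it runs on the workers for its side effect only.
### On a real cluster the printed output lands in the executor logs,
### not on the driver console.
temp.foreach(lambda x: x.display())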