I have written a Python script that sums all the numbers in the first column of each CSV file, as follows:
import os, sys, inspect, csv
### Current directory path.
curr_dir = os.path.split(inspect.getfile(inspect.currentframe()))[0]
### Set up the environment variables
spark_home_dir = os.path.realpath(os.path.abspath(os.path.join(curr_dir, "../spark")))
python_dir = os.path.realpath(os.path.abspath(os.path.join(spark_home_dir, "./python")))
os.environ["SPARK_HOME"] = spark_home_dir
os.environ["PYTHONPATH"] = python_dir
### Set up the pyspark directory path
pyspark_dir = python_dir
sys.path.append(pyspark_dir)
### Import pyspark
from pyspark import SparkConf, SparkContext
### Specify the directory containing the data files
data_path = os.path.realpath(os.path.abspath(os.path.join(curr_dir, "./test_dir")))
### myfunc adds up all the numbers in the first column of a CSV file.
def myfunc(s):
    total = 0
    if s.endswith(".csv"):
        cr = csv.reader(open(s, "rb"))
        for row in cr:
            total += int(row[0])
    return total
def main():
    ### Initialize the SparkConf and SparkContext
    conf = SparkConf().setAppName("ruofan").setMaster("spark://ec2-52-26-177-197.us-west-2.compute.amazonaws.com:7077")
    sc = SparkContext(conf=conf)
    datafile = sc.wholeTextFiles(data_path)
    ### Run myfunc on each file path; the tasks are shipped to the slave nodes
    temp = datafile.map(lambda (path, content): myfunc(str(path).strip('file:')))
    ### Collect the results and print them out
    for x in temp.collect():
        print x

if __name__ == "__main__":
    main()
I want to use Apache Spark to parallelize this summation over several CSV files with the same Python code. I have completed the following steps:

$ scp -r -i my-key-pair.pem my_dir root@ec2-52-27-82-124.us-west-2.compute.amazonaws.com

This uploads the directory my_dir (containing the Python code and the CSV files) to the cluster's master node.

$ ./spark/copy-dir my_dir

This distributes my Python code and the CSV files to all the slave nodes. On the master node, I set the environment variables:
$ export SPARK_HOME=~/spark
$ export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
However, when I run the Python script on the master node:

$ python sum.py

it shows the following error:
Traceback (most recent call last):
File "sum.py", line 18, in <module>
from pyspark import SparkConf, SparkContext
File "/root/spark/python/pyspark/__init__.py", line 41, in <module>
from pyspark.context import SparkContext
File "/root/spark/python/pyspark/context.py", line 31, in <module>
from pyspark.java_gateway import launch_gateway
File "/root/spark/python/pyspark/java_gateway.py", line 31, in <module>
from py4j.java_gateway import java_import, JavaGateway, GatewayClient
ImportError: No module named py4j.java_gateway
I have no idea what is causing this error. I would also like to know whether the master node automatically dispatches the job to all the slave nodes so that it runs in parallel. I would really appreciate any help.
Answer 0 (score: 2)
Here is how to debug this particular import error.
First, reproduce the failing import in a bare interpreter:

$ python
>>> from py4j.java_gateway import java_import, JavaGateway, GatewayClient
>>> import py4j
>>> exit()

If those imports fail, install py4j (you will need pip installed):

$ pip install py4j

Then confirm that the import now works:

$ python
>>> from py4j.java_gateway import java_import, JavaGateway, GatewayClient
>>> exit()
Then try running $ python sum.py again.
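If pip is not available on the cluster, there is another angle worth checking: Spark distributions bundle py4j as a zip under $SPARK_HOME/python/lib, and the script in the question only ever appends $SPARK_HOME/python to sys.path, never the py4j zip, which would produce exactly this ImportError. A minimal sketch of extra lines that could go into sum.py right after spark_home_dir is computed (this reuses the script's own spark_home_dir variable; the exact zip filename varies by Spark version, so the glob pattern here is an assumption about the layout):

import glob, os, sys
### Put Spark's bundled py4j zip(s) on sys.path before importing pyspark.
for zip_path in glob.glob(os.path.join(spark_home_dir, "python", "lib", "py4j-*-src.zip")):
    sys.path.append(zip_path)

Equivalently, the zip can be appended to PYTHONPATH in the shell before launching the script.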
Answer 1 (score: 0)
I think you are asking two different questions. It looks like you have an import error. Is it possible that you have a version of py4j installed on your local machine that has not been installed on the master node?
I can't help with the parallel execution part.
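One quick way to compare the two machines is sketched below, assuming it is run on both the local machine and the master node: py4j's __file__ attribute shows which copy (if any) the interpreter picks up, and pip show prints the installed version when pip is available. On the master node, the first command should fail with the same ImportError that sum.py produces.

$ python -c "import py4j; print py4j.__file__"
$ pip show py4j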