ImportError: No module named numpy on Spark workers

Posted: 2016-02-05 00:22:10

Tags: python numpy apache-spark pyspark

I launch pyspark in client mode with bin/pyspark --master yarn-client --num-executors 60. In the shell, import numpy works fine, but it fails inside KMeans. My impression is that the executors do not have numpy installed, and I have not found a good way to make the workers aware of numpy. I tried setting PYSPARK_PYTHON, but that did not work either.

import numpy

# Load the precomputed features from the .npz archive (opened in binary mode).
features = numpy.load(open("combined_features.npz", "rb"))
features = features['arr_0']
features.shape

# Distribute the feature matrix over 5000 partitions.
features_rdd = sc.parallelize(features, 5000)

from pyspark.mllib.clustering import KMeans, KMeansModel
from numpy import array
from math import sqrt

# This is where the job fails on the executors.
clusters = KMeans.train(features_rdd, 2, maxIterations=10, runs=10, initializationMode="random")

Stack trace

 org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/hadoop/3/scratch/local/usercache/ajkale/appcache/application_1451301880705_525011/container_1451301880705_525011_01_000011/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/hadoop/3/scratch/local/usercache/ajkale/appcache/application_1451301880705_525011/container_1451301880705_525011_01_000011/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/hadoop/3/scratch/local/usercache/ajkale/appcache/application_1451301880705_525011/container_1451301880705_525011_01_000011/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
  File "/hadoop/3/scratch/local/usercache/ajkale/appcache/application_1451301880705_525011/container_1451301880705_525011_01_000011/pyspark.zip/pyspark/mllib/__init__.py", line 25, in <module>

ImportError: No module named numpy

        at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
        at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
        at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
        at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:262)
        at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:99)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

7 Answers:

Answer 0 (score: 16)

To use Spark in YARN client mode, you need to have any dependencies installed on the machines on which YARN starts the executors. That is the only surefire way to make this work.

Using Spark with YARN cluster mode is a different story. You can distribute Python dependencies with spark-submit:

spark-submit --master yarn-cluster --py-files my_dependency.zip my_script.py

However, the situation with numpy is complicated by the same thing that makes it so fast: the fact that it does the heavy lifting in C. Because of the way it is installed, you won't be able to distribute numpy in this fashion.
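
If you are unsure which executor hosts are missing numpy, a quick diagnostic like the sketch below (not part of the original answer; the partition count is arbitrary) can be run from the pyspark shell. It reports each executor's hostname, Python interpreter, and numpy version, or None where the import fails.

import socket
import sys

def probe(_):
    # Runs on the executors: report hostname, interpreter, and numpy version (or None).
    try:
        import numpy
        version = numpy.__version__
    except ImportError:
        version = None
    yield (socket.gethostname(), sys.executable, version)

# Spread a small job over enough partitions to touch every executor.
print(sc.parallelize(range(1000), 60).mapPartitions(probe).distinct().collect())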

Answer 1 (score: 1)

numpy is not installed on the worker (virtual) machines. If you use Anaconda, it is very convenient to upload such Python dependencies when deploying the application in cluster mode (so there is no need to install numpy or other modules on each machine; they only have to be present in your Anaconda). First, zip your Anaconda and put the zip file on the cluster, then you can submit the job with the following script.

 spark-submit \
 --master yarn \
 --deploy-mode cluster \
 --archives hdfs://host/path/to/anaconda.zip#python-env \
 --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=python-env/anaconda/bin/python \
 app_main.py

YARN will copy anaconda.zip from the HDFS path to every worker and use python-env/anaconda/bin/python to execute the tasks.

The Running PySpark with Virtualenv reference may provide more information.
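
To confirm from inside the submitted job that the executors really picked up the shipped interpreter, a small check like this sketch can be added to app_main.py (not from the original answer; the python-env path matches the #python-env fragment alias above, and the partition count is arbitrary):

import sys
from pyspark import SparkContext

sc = SparkContext.getOrCreate()
# sys.executable is evaluated on the executors inside the lambda,
# so it should point into python-env/anaconda/bin/ if the archive was used.
print(sc.parallelize(range(60), 60).map(lambda _: sys.executable).distinct().collect())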

Answer 2 (score: 0)

I had a similar problem, but I don't think you need to set PYSPARK_PYTHON; instead, just install numpy on the worker machines (with apt-get or yum). The error also tells you which machine was missing the import.

Answer 3 (score: 0)

sudo pip install numpy

It seems that reinstalling numpy with "sudo" made the module findable.

Answer 4 (score: 0)

I had the same problem. If you are using Python 3, try installing numpy with pip3:

pip3 install numpy

Answer 5 (score: 0)

You have to be aware that numpy needs to be installed on each and every worker, and possibly on the master itself as well, depending on where your components run.

Also make sure you run the pip install numpy command from a root account (sudo is not enough), after forcing the umask to 022 (umask 022), so that the permissions cascade down to the Spark (or Zeppelin) user.

Answer 6 (score: 0)

For me (on a Mac), what actually solved it was this guide, which also explains how to run Python through Jupyter Notebooks: https://medium.com/@yajieli/installing-spark-pyspark-on-mac-and-fix-of-some-common-errors-355a9050f735

In short (assuming you installed Spark with brew install spark):

  1. Find your SPARK_PATH using brew info apache-spark
  2. Add these lines to your ~/.bash_profile
# Spark and Python
######
export SPARK_PATH=/usr/local/Cellar/apache-spark/2.4.1
export PYSPARK_DRIVER_PYTHON="jupyter"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
#For python 3, You have to add the line below or you will get an error
export PYSPARK_PYTHON=python3
alias snotebook='$SPARK_PATH/bin/pyspark --master local[2]'
######
  3. Open a Jupyter Notebook by simply calling pyspark

Keep in mind that you don't need to set up a Spark Context; just call:

from pyspark import SparkContext

sc = SparkContext.getOrCreate()
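
As a final sanity check (a minimal sketch, not from the guide), you can confirm from the notebook that numpy is importable both in the driver and on the local[2] workers:

import numpy
from pyspark import SparkContext

sc = SparkContext.getOrCreate()   # reuses the notebook's context if one already exists
# The lambda runs on the workers, so this only succeeds if they can import numpy too.
print(sc.parallelize([1.0, 4.0, 9.0], 2).map(lambda x: float(numpy.sqrt(x))).collect())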