I am running a Spark program on a large cluster (for which I do not have administrative privileges). numpy is not installed on the worker nodes. Hence, I bundled numpy with my program, but I get the following error:
Traceback (most recent call last):
  File "/home/user/spark-script.py", line 12, in <module>
    import numpy
  File "/usr/local/lib/python2.7/dist-packages/numpy/__init__.py", line 170, in <module>
  File "/usr/local/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 13, in <module>
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 8, in <module>
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/type_check.py", line 11, in <module>
  File "/usr/local/lib/python2.7/dist-packages/numpy/core/__init__.py", line 6, in <module>
ImportError: cannot import name multiarray
The script itself is actually quite simple:
from pyspark import SparkConf, SparkContext
sc = SparkContext()
sc.addPyFile('numpy.zip')
import numpy
a = sc.parallelize(numpy.array([12, 23, 34, 45, 56, 67, 78, 89, 90]))
print a.collect()
I understand that the error occurs because numpy dynamically loads its multiarray.so dependency, and even though my numpy.zip file includes the multiarray.so file, dynamic loading does not work with Apache Spark. Why is that? And how do you create a standalone numpy module with static linking?
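To illustrate where I think the problem lies: as far as I can tell, Python's zipimport mechanism only handles pure-Python modules (.py/.pyc) and silently ignores C extension modules (.so) inside a zip, which would explain why multiarray cannot be found. A minimal demonstration of this (the module names `pure_mod` and `ext_mod` are made up for the example):

```python
# Sketch of the suspected cause: zipimport can load pure-Python modules
# from a zip on sys.path, but not C extension (.so) modules.
import os
import sys
import tempfile
import zipfile

tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "bundle.zip")

with zipfile.ZipFile(zip_path, "w") as zf:
    # A pure-Python module inside the zip...
    zf.writestr("pure_mod.py", "VALUE = 42\n")
    # ...and a (fake) C extension in the same zip; zipimport skips .so
    # files entirely, so it is invisible to the import system.
    zf.writestr("ext_mod.so", b"\x7fELF not a real extension")

sys.path.insert(0, zip_path)

import pure_mod
print(pure_mod.VALUE)  # importing the pure-Python module from the zip works

try:
    import ext_mod
except ImportError as exc:
    print("ImportError:", exc)  # the .so inside the zip cannot be imported
```

If that is correct, then bundling numpy as a zip via addPyFile can never work for its compiled parts, regardless of whether the .so files are present in the archive.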
Thanks.