I'm trying to write some pyspark jobs that depend on a module I'd like to ship with the job rather than install globally on the cluster. I decided to try a zip file, but I can't seem to get it working, and I can't find any examples of this in the wild.
I build the zip by running:
mkdir -p ./build
cd ./build && python ../src/setup.py sdist --formats=zip
This creates a file named ./build/dist/mysparklib-0.1.zip. So far, so good.
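(For reference, the setup.py itself isn't shown here; a minimal sketch that would produce that sdist, assuming a standard setuptools layout under ./src, looks something like this:)

# Hypothetical minimal src/setup.py; the name and version just need to match the archive above.
from setuptools import setup, find_packages

setup(
    name='mysparklib',
    version='0.1',
    packages=find_packages(),
)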
My job looks like this:
from pyspark import SparkContext
# See: http://spark.apache.org/docs/latest/quick-start.html
readme_filename = './README.md'
sc = SparkContext('local', 'helloworld app')
readme_data = sc.textFile(readme_filename).cache()
def test_a_filter(s):
    import mysparklib
    return 'a' in s
a_s = readme_data.filter(test_a_filter).count()
b_s = readme_data.filter(lambda s: 'b' in s).count()
print("""
**************************************
* Lines with a: {}; Lines with b: {} *
**************************************
""".format(a_s, b_s))
sc.stop()
(This is mostly adapted from the quick start, except that I'm trying to import my module inside one of the filters.)
I start the job by running:
spark-submit --master local[4] --py-files './build/dist/mysparklib-0.1.zip' ./jobs/helloworld.py
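(As an aside, the same zip can also be shipped from inside the job with SparkContext.addPyFile rather than --py-files; a minimal sketch, assuming the same relative path:)

# Alternative to --py-files: distribute the zip programmatically.
from pyspark import SparkContext

sc = SparkContext('local', 'helloworld app')
sc.addPyFile('./build/dist/mysparklib-0.1.zip')  # sent to executors and added to their sys.path
# ... rest of the job as above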
And while I can see the zip file being included:
17/05/17 17:15:31 INFO SparkContext: Added file file:/Users/myuser/dev/mycompany/myproject/./build/dist/mysparklib-0.1.zip at file:/Users/myuser/dev/mycompany/myproject/./build/dist/mysparklib-0.1.zip with timestamp 1495055731604
it doesn't get imported:
17/05/17 17:15:34 INFO DAGScheduler: ResultStage 0 (count at /Users/myuser/dev/mycompany/myproject/./jobs/helloworld.py:15) failed in 1.162 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/Users/myuser/dev/mycompany/myproject/spark/python/lib/pyspark.zip/pyspark/worker.py", line 174, in main
process()
File "/Users/myuser/dev/mycompany/myproject/spark/python/lib/pyspark.zip/pyspark/worker.py", line 169, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/Users/myuser/dev/mycompany/myproject/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2408, in pipeline_func
File "/Users/myuser/dev/mycompany/myproject/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2408, in pipeline_func
File "/Users/myuser/dev/mycompany/myproject/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2408, in pipeline_func
File "/Users/myuser/dev/mycompany/myproject/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 345, in func
File "/Users/myuser/dev/mycompany/myproject/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1040, in <lambda>
File "/Users/myuser/dev/mycompany/myproject/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1040, in <genexpr>
File "/Users/myuser/dev/mycompany/myproject/./jobs/helloworld.py", line 12, in test_a_filter
import mysparklib
ModuleNotFoundError: No module named 'mysparklib'
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
As a sanity check, I ran python setup.py develop in mysparklib and tried importing it from the CLI, and it works without any trouble.
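Concretely, that sanity check was roughly the following (exact paths assumed):
cd ./src && python setup.py develop
python -c "import mysparklib"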
Any ideas?
Answer 0 (score: 1)
So I got this working! The core problem is that the sdist's directory structure isn't what Python expects when the zip is added to the module path (which is how --py-files works; you can confirm this by printing sys.path). Specifically, the sdist zip contains the file ./mysparklib-0.1/mysparklib/__init__.py, but what we need is a zip containing ./mysparklib/__init__.py.
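(If you want to check that claim yourself, a throwaway script along these lines prints sys.path as seen from an executor task; this is just a debugging sketch, not part of the job above:)

# Debug sketch: print sys.path as seen inside an executor task,
# which is where the --py-files zip gets added.
from pyspark import SparkContext

def executor_sys_path(_):
    import sys
    return sys.path

sc = SparkContext('local', 'syspath check')
print(sc.parallelize([0]).map(executor_sys_path).first())
sc.stop()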
So instead of running
cd ./build && python ../src/setup.py sdist --formats=zip
I'm now running
cd ./src && zip ../dist/mysparklib.zip -r ./mysparklib
and it works.
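To double-check the layout, listing the archive should show entries starting with mysparklib/ rather than mysparklib-0.1/:
unzip -l ./dist/mysparklib.zip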