org.apache.spark.SparkException: No port number in pyspark.daemon's stdout

Time: 2018-12-20 08:36:42

Tags: pyspark

I am running a spark-submit job on a Hadoop-YARN cluster:

spark-submit /opt/spark/examples/src/main/python/pi.py 1000

but it fails with the error message below. It looks like the Python workers are not starting.

2018-12-20 07:25:14 INFO  SparkContext:54 - Created broadcast 0 from broadcast at DAGScheduler.scala:1161
2018-12-20 07:25:14 INFO  DAGScheduler:54 - Submitting 1000 missing tasks from ResultStage 0 (PythonRDD[1] at reduce at /opt/spark/examples/src/main/python/pi.py:44) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14))
2018-12-20 07:25:14 INFO  YarnScheduler:54 - Adding task set 0.0 with 1000 tasks
2018-12-20 07:25:14 INFO  TaskSetManager:54 - Starting task 0.0 in stage 0.0 (TID 0, hadoop-slave2, executor 1, partition 0, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:14 INFO  TaskSetManager:54 - Starting task 1.0 in stage 0.0 (TID 1, hadoop-slave1, executor 2, partition 1, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:15 INFO  BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on hadoop-slave2:37217 (size: 4.2 KB, free: 93.3 MB)
2018-12-20 07:25:15 INFO  BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on hadoop-slave1:35311 (size: 4.2 KB, free: 93.3 MB)
2018-12-20 07:25:15 INFO  TaskSetManager:54 - Starting task 2.0 in stage 0.0 (TID 2, hadoop-slave2, executor 1, partition 2, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:15 INFO  TaskSetManager:54 - Starting task 3.0 in stage 0.0 (TID 3, hadoop-slave1, executor 2, partition 3, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:16 WARN  TaskSetManager:66 - Lost task 0.0 in stage 0.0 (TID 0, hadoop-slave2, executor 1): org.apache.spark.SparkException: 
Error from python worker:
Traceback (most recent call last):
File "/usr/lib64/python2.6/runpy.py", line 104, in _run_module_as_main
  loader, code, fname = _get_module_details(mod_name)
File "/usr/lib64/python2.6/runpy.py", line 79, in _get_module_details
  loader = get_loader(mod_name)
File "/usr/lib64/python2.6/pkgutil.py", line 456, in get_loader
  return find_loader(fullname)
File "/usr/lib64/python2.6/pkgutil.py", line 466, in find_loader
  for importer in iter_importers(fullname):
File "/usr/lib64/python2.6/pkgutil.py", line 422, in iter_importers
  __import__(pkg)
 File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/__init__.py", line 51, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/context.py", line 31, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/accumulators.py", line 97, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/serializers.py", line 71, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/cloudpickle.py", line 246, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/cloudpickle.py", line 270, in CloudPickler
 NameError: name 'memoryview' is not defined
PYTHONPATH was:
/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/filecache/21/__spark_libs__3793296165132209773.zip/spark-core_2.11-2.4.0.jar:/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip:/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/py4j-0.10.7-src.zip
org.apache.spark.SparkException: No port number in pyspark.daemon's stdout
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:204)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:122)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:95)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)

1 Answer:

Answer 0 (score: 0)

I believe this issue occurs when the Python versions don't match. In the traceback above, the executors are launching /usr/lib64/python2.6, but the memoryview builtin only exists from Python 2.7 onward (and PySpark in Spark 2.4 requires Python 2.7+), hence the NameError and the missing port number from pyspark.daemon.
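A quick way to confirm this on a worker node (a hedged check, assuming both interpreters are on that node's PATH): evaluating memoryview under Python 2.6 reproduces the same NameError, while 2.7 accepts it.

    $ python2.6 -c "memoryview"
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    NameError: name 'memoryview' is not defined
    $ python2.7 -c "memoryview" && echo ok
    ok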

Adding the following to my ~/.bash_profile worked for me:

alias spark-submit='PYSPARK_PYTHON=$(which python) spark-submit'

It should force Spark to use the same version of Python that you have loaded in your modules.
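If you prefer not to rely on a shell alias, the interpreter can also be passed per submission. A minimal sketch, assuming the job runs with --master yarn and that /usr/bin/python2.7 exists at the same path on every node (both are assumptions; the original command above only gives the script path):

    spark-submit \
      --master yarn \
      --conf spark.pyspark.python=/usr/bin/python2.7 \
      /opt/spark/examples/src/main/python/pi.py 1000

spark.pyspark.python sets the Python executable for both the driver and the executors and takes precedence over the PYSPARK_PYTHON environment variable. Whichever way you set it, the path has to resolve on the worker nodes, since the NameError above is raised by the executors' /usr/lib64/python2.6, not by the driver.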