SparkException: Python worker did not connect back in time

Date: 2015-07-04 21:57:04

Tags: python hadoop apache-spark yarn pyspark

I am trying to submit a Python job to a two-worker-node Spark cluster, but I keep hitting the following error, which eventually causes spark-submit to fail:

15/07/04 21:30:40 WARN scheduler.TaskSetManager: Lost task 0.1 in stage 0.0 (TID 2, workernode0.rhom-spark.b9.internal.cloudapp.net): org.apache.spark.SparkException: Python worker did not connect back in time
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:135)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:64)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:102)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:278)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:245)
    at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:305)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:278)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:245)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Accept timed out
    at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
    at java.net.DualStackPlainSocketImpl.socketAccept(DualStackPlainSocketImpl.java:135)
    at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
    at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:199)
    at java.net.ServerSocket.implAccept(ServerSocket.java:530)
    at java.net.ServerSocket.accept(ServerSocket.java:498)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:130)
    ... 15 more

I submit the job with:

    spark-submit --master yarn --py-files tile.py --num-executors 1 --executor-memory 2g main.py

Any ideas?

1 Answer:

Answer 0 (score: 2)

This happens when the Python worker process fails to connect back to the Spark executor JVM. Spark communicates with its Python worker processes over sockets. There are many possible causes, and the specific details should show up in the logs on the executor/worker machines.
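Since the job is submitted with --master yarn, the aggregated executor logs can be pulled with the YARN CLI, and one frequent culprit is a missing or mismatched Python interpreter on the worker nodes. A minimal sketch of both checks (the application ID and interpreter path below are placeholders, not values taken from this question):

    # Pull the aggregated executor/container logs for the failed application
    # (substitute the real application ID printed by spark-submit).
    yarn logs -applicationId application_1436040000000_0001 > app.log

    # Rule out a missing or mismatched Python on the workers by pinning the
    # executor interpreter explicitly (the path is an assumption):
    spark-submit --master yarn \
      --conf spark.executorEnv.PYSPARK_PYTHON=/usr/bin/python2.7 \
      --py-files tile.py \
      --num-executors 1 --executor-memory 2g \
      main.py

If the worker nodes resolve a different Python than the driver, the forked worker can fail to start or crash before it connects back, which surfaces as exactly this accept timeout.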