Spark on EMR Yarn - EOFError

Date: 2015-09-04 12:18:44

Tags: apache-spark runtime-error eof pyspark emr

We are running some PySpark processes on Yarn. As the size of the dataset increases, we get the following error in the Yarn logs:

 Traceback (most recent call last):
      File "/home/hadoop/spark/python/lib/pyspark.zip/pyspark/daemon.py", line 157, in manager
      File "/home/hadoop/spark/python/lib/pyspark.zip/pyspark/daemon.py", line 61, in worker
      File "/home/hadoop/spark/python/lib/pyspark.zip/pyspark/worker.py", line 136, in main
        if read_int(infile) == SpecialLengths.END_OF_STREAM:
      File "/home/hadoop/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 544, in read_int
        raise EOFError
java.net.SocketException: Socket is closed
        at java.net.Socket.shutdownOutput(Socket.java:1496)
        at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3$$anonfun$apply$2.apply$mcV$sp(PythonRDD.scala:256)
        at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3$$anonfun$apply$2.apply(PythonRDD.scala:256)
        at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3$$anonfun$apply$2.apply(PythonRDD.scala:256)
        at org.apache.spark.util.Utils$.tryLog(Utils.scala:1785)
        at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:256)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
        at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:208)
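The `EOFError` at the top of the traceback comes from `read_int` in `pyspark/serializers.py`: PySpark frames its worker protocol with a 4-byte big-endian length prefix, and `read_int` raises `EOFError` when the stream ends before a full prefix arrives, i.e. when the JVM side has already closed the socket. A minimal sketch of that behavior (simplified from the PySpark source, not the exact implementation):

```python
import io
import struct

def read_int(stream):
    # Read the 4-byte big-endian length prefix PySpark uses for framing.
    data = stream.read(4)
    if len(data) < 4:
        # The peer closed the connection before a full frame arrived.
        raise EOFError
    return struct.unpack("!i", data)[0]

# A complete frame decodes normally...
print(read_int(io.BytesIO(struct.pack("!i", 42))))  # prints 42

# ...while a closed (empty) stream raises EOFError, as in the traceback.
try:
    read_int(io.BytesIO(b""))
except EOFError:
    print("EOFError")  # prints EOFError
```

This is consistent with the paired Java-side `SocketException: Socket is closed`: both ends are reporting the same torn-down connection between the JVM and the Python worker.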

We are running on EMR: 3 × m3.xlarge instances, each with 4 vCPUs, 15 GiB of RAM, and 2 × 40 GB of storage.

The job is submitted with the following sh script:

export SPARK_HOME=/home/hadoop/spark
JARS="/home/hadoop/avro-1.7.7.jar,/home/hadoop/spark-avro-master/target/scala-2.10/spark-avro_2.10-1.0.0.jar"

$SPARK_HOME/bin/spark-submit --master yarn-cluster --py-files deploy.zip --jars $JARS main.py

where deploy.zip contains some utility methods and lambda functions.
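For context, `--py-files deploy.zip` ships the archive to every executor and places it on the Python path, so the utility modules inside can be imported by name on the workers. A minimal local sketch of that mechanism (the module name `helpers` and its contents are hypothetical stand-ins for whatever deploy.zip actually holds):

```python
import os
import sys
import tempfile
import zipfile

# Build a stand-in for deploy.zip containing one utility module.
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "deploy.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("helpers.py", "def double(x):\n    return 2 * x\n")

# Spark prepends such archives to sys.path on each worker; mimic that here.
sys.path.insert(0, zip_path)
import helpers

print(helpers.double(21))  # prints 42
```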

No other configuration changes were made to the cluster.

Looking at the UI, all jobs appear to complete with a SUCCESS status, but we would like to get rid of this error, or at least understand what is causing it.

Do you have any idea what the origin of this error might be?

Thanks!

0 Answers:

There are no answers yet.