Pyspark java.lang.OutOfMemoryError: Requested array size exceeds VM limit

Asked: 2015-08-11 14:46:53

Tags: python scala hadoop apache-spark pyspark

I am running the following PySpark job:

spark-submit --master yarn-client --driver-memory 150G --num-executors 8 --executor-cores 4 --executor-memory 150G benchmark_script_1.py hdfs:///tmp/data/sample150k 128 hdfs:///tmp/output/sample150k | tee ~/output/sample150k.log

The job itself is fairly standard. It just reads in some files and counts them:

from datetime import datetime

# sc, inputFileDir and partitions are created earlier from the command-line arguments
print(str(datetime.now()) + " - Ingesting files...")
files = sc.wholeTextFiles(inputFileDir, partitions)
fileCount = files.count()
print(str(datetime.now()) + " - " + str(fileCount) + " files ingested")

The source folder contains ~150,000 files: 35 GB unreplicated and 105 GB replicated. Quite heavy, but not crazy.

Running the above produces the following stack trace:

15/08/11 15:39:20 WARN TaskSetManager: Lost task 61.3 in stage 0.0 (TID 76, <NODE>): java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:794)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:833)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:897)
        at java.io.DataInputStream.read(DataInputStream.java:100)
        at org.spark-project.guava.io.ByteStreams.copy(ByteStreams.java:207)
        at org.spark-project.guava.io.ByteStreams.toByteArray(ByteStreams.java:252)
        at org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:83)
        at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69)
        at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:143)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
        at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:405)
        at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:243)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
        at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:205)

More information can be found in the log of the offending executor:

15/08/11 12:28:18 ERROR executor.CoarseGrainedExecutorBackend: RECEIVED SIGNAL 15: SIGTERM
15/08/11 12:28:18 ERROR util.Utils: Uncaught exception in thread stdout writer for python
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
        at java.lang.StringCoding.encode(StringCoding.java:350)
        at java.lang.String.getBytes(String.java:939)
        at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:573)
        at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:395)
        at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:405)
        at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:405)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
        at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:405)
        at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:243)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
        at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:205)
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/jars/spark-assembly-1.3.0-cdh5.4.2-hadoop2.6.0-cdh5.4.2.jar/pyspark/daemon.py", line 162, in manager
  File "/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/jars/spark-assembly-1.3.0-cdh5.4.2-hadoop2.6.0-cdh5.4.2.jar/pyspark/daemon.py", line 60, in worker
  File "/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/jars/spark-assembly-1.3.0-cdh5.4.2-hadoop2.6.0-cdh5.4.2.jar/pyspark/worker.py", line 126, in main
    if read_int(infile) == SpecialLengths.END_OF_STREAM:
  File "/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/jars/spark-assembly-1.3.0-cdh5.4.2-hadoop2.6.0-cdh5.4.2.jar/pyspark/serializers.py", line 528, in read_int
15/08/11 12:28:18 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[stdout writer for python,5,main]
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
        at java.lang.StringCoding.encode(StringCoding.java:350)
        at java.lang.String.getBytes(String.java:939)
        at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:573)
        at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:395)
        at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:405)
        at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:405)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
        at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:405)
        at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:243)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
        at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:205)
    raise EOFError
EOFError
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/jars/spark-assembly-1.3.0-cdh5.4.2-hadoop2.6.0-cdh5.4.2.jar/pyspark/daemon.py", line 162, in manager
  File "/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/jars/spark-assembly-1.3.0-cdh5.4.2-hadoop2.6.0-cdh5.4.2.jar/pyspark/daemon.py", line 60, in worker
  File "/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/jars/spark-assembly-1.3.0-cdh5.4.2-hadoop2.6.0-cdh5.4.2.jar/pyspark/worker.py", line 126, in main
    if read_int(infile) == SpecialLengths.END_OF_STREAM:
  File "/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/jars/spark-assembly-1.3.0-cdh5.4.2-hadoop2.6.0-cdh5.4.2.jar/pyspark/serializers.py", line 528, in read_int
15/08/11 12:28:18 ERROR executor.Executor: Exception in task 7.0 in stage 0.0 (TID 5)
java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:794)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:833)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:897)
        at java.io.DataInputStream.read(DataInputStream.java:100)
        at org.spark-project.guava.io.ByteStreams.copy(ByteStreams.java:207)
        at org.spark-project.guava.io.ByteStreams.toByteArray(ByteStreams.java:252)
        at org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:83)
        at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69)
        at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:143)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
        at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:405)
        at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:243)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
        at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:205)
    raise EOFError
EOFError

I have already disabled the HDFS cache:

conf.set("fs.hdfs.impl.disable.cache", True)

Note that the exact same script in Scala runs without any problems at all.

Although this is a big job, there is plenty of memory available. Does anyone know what the problem might be?

Update

I allocated more memory to the JVM:

export set JAVA_OPTS="-Xmx6G -XX:MaxPermSize=2G -XX:+UseCompressedOops"

Sadly, no progress.

1 Answer:

Answer 0 (score: 2)

I am having a similar problem with spark-submit and Java, saving an 8 GB DataFrame from a Docker container with 16 cores and 300 GB of RAM. I have not solved it yet, but I have come across several possible workarounds:

Starting on page 77, Lightbend suggests it is an issue with the shell and that using @transient or encapsulating in an object may be a workaround. That does not seem to apply to either of our cases.

Databricks suggests that increasing spark.sql.shuffle.partitions may help; they recommend going from the default of '200' to '400'. I tried '800' and '2000' in spark-defaults.conf but still got OOM errors.
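
For completeness, a hedged PySpark sketch of setting that option programmatically rather than in spark-defaults.conf (illustrative values only; the code in this answer is otherwise Java):

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

# spark.sql.* properties set on the SparkConf are picked up by the SQLContext,
# just like entries in spark-defaults.conf.
conf = SparkConf().set("spark.sql.shuffle.partitions", "800")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

# Or at runtime: sqlContext.setConf("spark.sql.shuffle.partitions", "800")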

Databricks also recommends calling DataFrame.repartition(400) in the code. Alternatively, increase the number of partitions given as the last argument in the call sc.wholeTextFiles(inputFileDir, partitions); both are sketched below.
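
A short PySpark sketch of those two knobs as they would apply to the question's job (the values are illustrative, and sc / inputFileDir are assumed to be set up as in the question):

# Hint wholeTextFiles to create more input partitions up front
# (the second argument is a minPartitions hint).
files = sc.wholeTextFiles(inputFileDir, 400)

# Or spread the data over more partitions explicitly before the heavy stage.
files = files.repartition(400)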

The JAVA_OPTS suggestion from Stack Overflow does not apply, because -XX:+UseCompressedOops is disabled (on Java 8) once the heap size exceeds 32 GB.

Edit

Also tried:

  • spark.default.parallelism=1000 (the default is the number of cores). Still OOM errors.
  • dataFrame.repartition(1000) in the code. Still OOM errors.

Possible workarounds

  • Using an intermediate RDD<LabeledPoint> allowed me to create the DataFrame, but the Spark reflection-based schema does not work with MLlib (the numClasses attribute is missing).

    DataFrame df = sqlContext.createDataFrame(sc.parallelize(List<LabeledPoint>),LabeledPoint.class)

  • Using an intermediate JSON file allowed me to create a DataFrame that works with MLlib (a PySpark sketch of the same idea follows the snippet below).

    saveAsJson(List<Row>/*generated data*/, filename); DataFrame df = sqlContext.read().json(filename)
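
A rough PySpark equivalent of that intermediate-JSON workaround (hypothetical names and paths; sqlContext.read.json is the DataFrameReader API from Spark 1.4 on, while older releases use sqlContext.jsonFile):

import json

# Dump the generated records as JSON Lines; on a real cluster the file must be
# somewhere the executors can read it, e.g. HDFS rather than a local path.
with open("/tmp/generated_rows.json", "w") as f:
    for row in generated_rows:  # generated_rows: hypothetical list of dicts
        f.write(json.dumps(row) + "\n")

df = sqlContext.read.json("file:///tmp/generated_rows.json")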