Spark: Requested array size exceeds VM limit in BufferHolder.grow

Asked: 2018-04-09 21:34:34

Tags: scala apache-spark apache-spark-sql out-of-memory spark-dataframe

I am getting this error in a mixed Scala/Python application (similar to Zeppelin) running on Spark 2.1 on a Hadoop cluster:

18/04/09 08:19:34 ERROR Utils: Uncaught exception in thread stdout writer for /x/python/miniconda/bin/python
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
    at org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder.grow(BufferHolder.java:73)
    at org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter.write(UnsafeRowWriter.java:214)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply6_4$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply7_16$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
    at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$generateResultProjection$1.apply(AggregationIterator.scala:232)
    at org.apache.spark.sql.execution.aggregate.AggregationIterator$$anonfun$generateResultProjection$1.apply(AggregationIterator.scala:221)
    at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:159)
    at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:29)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$GroupedIterator.takeDestructively(Iterator.scala:1076)
    at scala.collection.Iterator$GroupedIterator.go(Iterator.scala:1091)
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1129)
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1132)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:504)
    at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:328)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
    at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)

It seems strange that this kind of error is thrown at all, since BufferHolder.grow contains an explicit check:

if (neededSize > Integer.MAX_VALUE - totalSize()) {
  throw new UnsupportedOperationException(
    "Cannot grow BufferHolder by size " + neededSize + " because the size after growing " +
      "exceeds size limitation " + Integer.MAX_VALUE);
}

Yet at runtime it gets past this assertion and ends up initializing an array larger than Integer.MAX_VALUE (line 73). The error does not appear to be related to configuration tuning (please correct me if I'm wrong), so I will skip the application/cluster specifics, except for this: 150 executors with 2 cores each, and spark.sql.shuffle.partitions set to 8000 to eliminate shuffle skew.
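To make the gap concrete, here is a minimal arithmetic sketch (plain Scala, not Spark source) of how the guard can pass while the subsequent allocation still blows up. It rests on two assumptions I have not verified line-by-line against the 2.1 code base: that a typical JVM refuses arrays within a few bytes of Integer.MAX_VALUE (the "Requested array size exceeds VM limit" error), and that the pre-2.3 BufferHolder roughly doubles its backing byte[] when it grows. The sizes are picked only to illustrate the problematic band.

// Self-contained arithmetic sketch (NOT Spark source). Assumptions:
//   * the JVM rejects arrays within a few bytes of Integer.MAX_VALUE;
//   * the pre-2.3 BufferHolder roughly doubles its backing byte[] on growth.
object BufferGrowthArithmetic {
  // Conservative ceiling, similar to what JDK collections use for the same reason.
  val AssumedVmArrayLimit: Long = Int.MaxValue.toLong - 8

  // The quoted guard: passes whenever totalSize + neededSize <= Int.MaxValue.
  def guardPasses(totalSize: Int, neededSize: Int): Boolean =
    neededSize <= Int.MaxValue - totalSize

  // What a doubling growth strategy would try to allocate (computed as Long here;
  // with even larger inputs the same product wraps around in 32-bit int arithmetic).
  def doubledRequest(totalSize: Int, neededSize: Int): Long =
    (totalSize.toLong + neededSize) * 2

  def main(args: Array[String]): Unit = {
    val totalSize  = 1000000000 // ~1 GB already buffered for the current row
    val neededSize = 73741820   // one more ~70 MB field to write
    println(s"guard passes:       ${guardPasses(totalSize, neededSize)}")    // true
    println(s"allocation request: ${doubledRequest(totalSize, neededSize)}") // 2147483640
    println(s"over VM limit:      ${doubledRequest(totalSize, neededSize) > AssumedVmArrayLimit}") // true
  }
}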

The parent RDD of the PythonRDD is actually a DataFrame that is the result of a shuffle. It has ~30 columns, one of which is a very large String (up to 100 MB, but 150 KB on average). I mention this because, judging from the stack trace, the error seems to originate somewhere between the shuffle read and the PythonRDD. Also, it always happens in the last 10% of the partitions (the input data is static); the first 90% complete without errors.
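For context, below is a rough diagnostic of the kind I would run to see how large the offending rows actually get before they reach the Python writer. It is only a sketch: the table name `pre_shuffle_input`, the wide column `big_str`, and the grouping key `group_key` are hypothetical placeholders for whatever the real pipeline uses.

// Hedged diagnostic sketch: measure the size distribution of the huge string
// column, overall and per aggregation key, to see whether a few rows approach
// the ~2 GB UnsafeRow buffer ceiling. All names below are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object RowSizeDiagnostics {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("row-size-check").getOrCreate()

    val df = spark.table("pre_shuffle_input") // stand-in for the real source DataFrame

    // Per-row size of the big column (character count as a proxy for bytes).
    df.select(length(col("big_str")).as("chars"))
      .agg(max("chars").as("max_chars"), avg("chars").as("avg_chars"))
      .show()

    // If the failing stage aggregates by a key, the size that matters is per key.
    df.groupBy(col("group_key"))
      .agg(sum(length(col("big_str"))).as("chars_per_key"))
      .orderBy(desc("chars_per_key"))
      .show(20, truncate = false)
  }
}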

Has anyone run into this before, or can shed some light on it?

1 Answer:

Answer 0 (score: 1):

This is an internal Spark issue, as described here: https://issues.apache.org/jira/browse/SPARK-22033. It was resolved in 2.3.0.
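From a quick read of that JIRA, the fix essentially stops trusting Integer.MAX_VALUE as the ceiling and caps buffer growth at a slightly smaller constant that real VMs will accept. Very roughly, as a Scala paraphrase rather than the actual patch:

// Rough paraphrase of the post-fix growth policy (not the actual Spark patch):
// cap both the size check and the doubling at a limit the VM can actually allocate.
object CappedGrowthSketch {
  val ArrayMax: Int = Int.MaxValue - 15 // assumed safety margin below the VM limit

  /** New buffer length for a row of `totalSize` bytes that needs `neededSize` more. */
  def newBufferLength(totalSize: Int, neededSize: Int, currentLength: Int): Int = {
    require(neededSize <= ArrayMax - totalSize,
      s"Cannot grow buffer by $neededSize: result would exceed $ArrayMax")
    val needed = totalSize + neededSize
    if (currentLength >= needed) currentLength
    else if (needed < ArrayMax / 2) needed * 2 // still safe to double
    else ArrayMax                              // otherwise grow straight to the cap
  }
}

On 2.1, the practical options seem to be upgrading to a release that contains this fix, or keeping individual rows (including the aggregated string column) comfortably below the ~2 GB buffer ceiling.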