Spark executors lost with "GC overhead limit exceeded" despite 20 executors with 25 GB each

Time: 2015-08-18 15:34:50

Tags: apache-spark apache-spark-sql

This "GC overhead limit exceeded" error is driving me crazy. I have 20 executors with 25 GB each, and I simply don't understand how it can throw a GC overhead error; my dataset is not even that big. Once this GC error occurs in one executor, that executor is lost, and then the other executors are slowly lost as well with IOException, RPC client disassociated, shuffle not found, and so on. Please help me solve this problem; it is frustrating because I am new to Spark. Thanks in advance.

WARN scheduler.TaskSetManager: Lost task 7.0 in stage 363.0 (TID 3373, myhost.com): java.lang.OutOfMemoryError: GC overhead limit exceeded
            at org.apache.spark.sql.types.UTF8String.toString(UTF8String.scala:150)
            at org.apache.spark.sql.catalyst.expressions.GenericRow.getString(rows.scala:120)
            at org.apache.spark.sql.columnar.STRING$.actualSize(ColumnType.scala:312)
            at org.apache.spark.sql.columnar.compression.DictionaryEncoding$Encoder.gatherCompressibilityStats(compressionSchemes.scala:224)
            at org.apache.spark.sql.columnar.compression.CompressibleColumnBuilder$class.gatherCompressibilityStats(CompressibleColumnBuilder.scala:72)
            at org.apache.spark.sql.columnar.compression.CompressibleColumnBuilder$class.appendFrom(CompressibleColumnBuilder.scala:80)
            at org.apache.spark.sql.columnar.NativeColumnBuilder.appendFrom(ColumnBuilder.scala:87)
            at org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:148)
            at org.apache.spark.sql.columnar.InMemoryRelation$$anonfun$3$$anon$1.next(InMemoryColumnarTableScan.scala:124)
            at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:277)
            at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
            at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
            at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
            at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
            at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
            at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
            at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
            at org.apache.spark.scheduler.Task.run(Task.scala:70)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)

1 Answer:

Answer 0 (score: 1)

"GC overhead limit exceeded" is thrown when the JVM spends more than 98% of its CPU time on garbage collection. This often shows up in Scala code that relies heavily on immutable data structures, because every transformation forces the JVM to allocate many new objects and discard the previous ones from the heap. If that is your problem, try switching to mutable data structures for the hot path.
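To make that concrete, here is a minimal, self-contained Scala sketch (not taken from the asker's job) contrasting an immutable accumulator, which copies the whole list on every append and leaves the old copy behind as garbage, with a mutable buffer that grows in place:

    import scala.collection.mutable.ArrayBuffer

    object GcPressureSketch {
      def main(args: Array[String]): Unit = {
        val n = 100000

        // Immutable accumulation: `acc :+ x` copies the entire List on each
        // append, so every iteration leaves the previous copy for the GC.
        var immutableAcc = List.empty[String]
        for (i <- 0 until n) {
          immutableAcc = immutableAcc :+ i.toString
        }

        // Mutable accumulation: one ArrayBuffer grows in place, producing
        // far fewer short-lived intermediate objects.
        val mutableAcc = ArrayBuffer.empty[String]
        for (i <- 0 until n) {
          mutableAcc += i.toString
        }
      }
    }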

Please read this page http://spark.apache.org/docs/latest/tuning.html#garbage-collection-tuning to learn how to tune the GC.
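For reference, the GC-related knobs from that guide can be set on the SparkConf (or via --conf on spark-submit). The values below are only an illustrative sketch for a Spark 1.x setup, not a drop-in fix for this particular job:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setAppName("gc-tuning-sketch")
      // Log GC activity in the executor JVMs so you can see how often
      // collections run and how long they take; also try the G1 collector.
      .set("spark.executor.extraJavaOptions",
        "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseG1GC")
      // Spark 1.x: shrink the fraction of heap reserved for cached blocks
      // (default 0.6) to leave more room for task execution.
      .set("spark.storage.memoryFraction", "0.4")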