"GC overhead limit exceeded" on Spark 1.5.2 (reproducible roughly every 20 hours)

Asked: 2016-09-06 08:18:18

Tags: java apache-spark spark-streaming

I am hitting "GC overhead limit exceeded" on Spark 1.5.2, reproducible roughly every 20 hours. There is no memory leak in my code. Could this be a Spark issue? Spark 1.6.0 changed the memory management; would upgrading fix this problem?

2016-09-05 19:40:56,714 WARN TaskSetManager: Lost task 11.0 in stage 13155.0 (TID 47982, datanode004.current.rec.mapreduce.m1.p.fti.net): java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.IdentityHashMap.resize(IdentityHashMap.java:471)
    at java.util.IdentityHashMap.put(IdentityHashMap.java:440)
    at org.apache.spark.util.SizeEstimator$SearchState.enqueue(SizeEstimator.scala:159)
    at org.apache.spark.util.SizeEstimator$$anonfun$visitSingleObject$1.apply(SizeEstimator.scala:203)
    at org.apache.spark.util.SizeEstimator$$anonfun$visitSingleObject$1.apply(SizeEstimator.scala:202)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:202)
    at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:186)
    at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:54)
    at org.apache.spark.util.collection.SizeTracker$class.takeSample(SizeTracker.scala:78)
    at org.apache.spark.util.collection.SizeTracker$class.afterUpdate(SizeTracker.scala:70)
    at org.apache.spark.util.collection.SizeTrackingVector.$plus$eq(SizeTrackingVector.scala:31)
    at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:278)
    at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:262)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:262)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)

2016-09-05 19:40:56,725 WARN TaskSetManager: Lost task 7.0 in stage 13155.0 (TID 47978, datanode004.current.rec.mapreduce.m1.p.fti.net): java.io.FileNotFoundException: /var/opt/hosting/data/disk1/hadoop/yarn/usercache/nlevert/appcache/application_1472802379984_2249/blockmgr-f71761be-e12b-4bbc-bf38-9e6f7ddbb3a2/14/shuffle_2171_7_0.data (No such file or directory)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:88)
    at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:177)
    at org.apache.spark.util.collection.WritablePartitionedPairCollection$$anon$1.writeNext(WritablePartitionedPairCollection.scala:55)
    at org.apache.spark.util.collection.ExternalSorter.writePartitionedFile(ExternalSorter.scala:681)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:80)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

[Image: memory consumption graph]

2 Answers:

Answer 0 (score: 0)

In a similar case I ran into, increasing the memory solved the problem. Try the following:

Add the following parameters to your spark-submit (or spark-shell) invocation:

executor.memory=6G sets the executor (worker) memory
driver.memory=6G sets the driver memory

In your case, the first one will probably help.
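
For reference, a minimal Scala sketch of the executor-memory half of this (the application name and the 6g value are only illustrative; on the command line the same settings correspond to spark-submit's --executor-memory and --driver-memory flags):

    import org.apache.spark.SparkConf

    // Raise the per-executor memory programmatically; spark.executor.memory is the
    // property behind spark-submit's --executor-memory flag.
    val conf = new SparkConf()
      .setAppName("MyStreamingApp")          // placeholder application name
      .set("spark.executor.memory", "6g")
    // Driver memory is best passed on the command line (--driver-memory 6g):
    // in client mode the driver JVM has already started by the time this code runs,
    // so setting spark.driver.memory here would have no effect.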

Answer 1 (score: 0)

Another possibility is that the data is being held in Spark as plain (deserialized) objects. Try using serialization and compression for the data maintained in memory. Try this:

    import org.apache.spark.SparkConf

    val conf = new SparkConf().setAppName("Test-App")
    conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    conf.set("spark.io.compression.codec", "org.apache.spark.io.SnappyCompressionCodec")
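
For the serializer to actually apply to cached data, the RDDs also need to be persisted at a serialized storage level. A minimal sketch building on the conf above (the input path is a placeholder, and the commented-out class registration uses a hypothetical application class, not anything from the original question):

    import org.apache.spark.SparkContext
    import org.apache.spark.storage.StorageLevel

    // Optionally register the classes you cache so Kryo stores them compactly:
    // conf.registerKryoClasses(Array(classOf[MyRecord]))   // MyRecord is hypothetical

    val sc = new SparkContext(conf)
    val data = sc.textFile("hdfs:///path/to/input")         // placeholder path

    // MEMORY_ONLY_SER keeps each cached partition as a serialized byte array
    // instead of deserialized Java objects, which usually lowers GC pressure
    // at the cost of some CPU. Setting spark.rdd.compress=true would additionally
    // compress those blocks with the codec configured above.
    data.persist(StorageLevel.MEMORY_ONLY_SER)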