Spark: java.lang.OutOfMemoryError: GC overhead limit exceeded

Asked: 2015-09-01 00:16:46

Tags: scala apache-spark

I have the following code, which reads data from an input file, builds a pair RDD, and then converts it into a Map for later lookups. I then broadcast this map, which is several GB in size. Is there a way to do the collectAsMap() more efficiently, or to replace it with some other call?

val result_paired_rdd = prods_user_flattened.collectAsMap()

sc.broadcast(result_paired_rdd)

I get the following error. I have also tried passing --executor-memory 7G to the spark-submit command.

15/08/31 08:29:51 INFO BlockManagerInfo: Removed taskresult_48 on host3:48924 in memory (size: 11.4 MB, free: 3.6 GB)
15/08/31 08:29:51 INFO BlockManagerInfo: Added taskresult_50 in memory on host3:48924 (size: 11.6 MB, free: 3.6 GB)
15/08/31 08:29:52 INFO BlockManagerInfo: Added taskresult_51 in memory on host2:60182 (size: 11.6 MB, free: 3.6 GB)
15/08/31 08:30:02 ERROR Utils: Uncaught exception in thread task-result-getter-0
java.lang.OutOfMemoryError: GC overhead limit exceeded
            at java.util.Arrays.copyOfRange(Arrays.java:2694)
            at java.lang.String.<init>(String.java:203)
            at com.esotericsoftware.kryo.io.Input.readString(Input.java:448)
            at com.esotericsoftware.kryo.serializers.DefaultSerializers$StringSerializer.read(DefaultSerializers.java:157)
            at com.esotericsoftware.kryo.serializers.DefaultSerializers$StringSerializer.read(DefaultSerializers.java:146)
            at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
            at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:42)
            at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:33)
            at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
            at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:338)
            at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:293)
            at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
            at org.apache.spark.serializer.KryoSerializerInstance.deserialize(KryoSerializer.scala:173)
            at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:79)
            at org.apache.spark.scheduler.TaskSetManager.handleSuccessfulTask(TaskSetManager.scala:621)
            at org.apache.spark.scheduler.TaskSchedulerImpl.handleSuccessfulTask(TaskSchedulerImpl.scala:379)
            at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:82)
            at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
            at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
            at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
            at org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

1 Answer:

Answer 0 (score: 0)

From the logs, it looks like the driver program is running out of memory.

For certain actions, such as collect, the RDD data from all workers is transferred to the driver JVM.

  1. Check the driver JVM settings (e.g. the --driver-memory flag passed to spark-submit, which is separate from --executor-memory)
  2. Avoid collecting large amounts of data onto the driver JVM
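To make the second point concrete, here is an untested sketch of replacing the driver-side map with a distributed join. It assumes a Spark application context; `prods_user_flattened` follows the question, while `queries`, the tab-separated format, and the file paths are hypothetical stand-ins for whatever the broadcast map was being looked up against.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: assumes this runs inside a spark-submit application.
val sc = new SparkContext(new SparkConf().setAppName("lookup-join"))

// Hypothetical reconstruction of the pair RDD from the question.
val prods_user_flattened = sc.textFile("hdfs:///input/prods_user.txt")
  .map { line => val f = line.split("\t"); (f(0), f(1)) }

// Hypothetical RDD of keys that would otherwise be looked up
// in the broadcast map on each executor.
val queries = sc.textFile("hdfs:///input/queries.txt").map(k => (k, ()))

// join() shuffles both RDDs by key across the cluster, so the
// multi-GB lookup table never has to be materialized in the
// driver JVM, unlike collectAsMap() followed by broadcast().
val joined = queries.join(prods_user_flattened) // RDD[(String, ((), String))]
```

If the lookup side is genuinely small (tens of MB rather than GB), collectAsMap() plus broadcast() is reasonable, but the collect still happens on the driver, so --driver-memory is the setting to raise, not --executor-memory.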