Spark out of memory error

Date: 2015-07-03 05:23:24

Tags: apache-spark out-of-memory

My Spark program runs fine on a smaller dataset (around 400GB), but when I scale it up to a larger dataset I start getting these errors:

java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Requested array size exceeds VM limit

My program looks like this: sc.textFile -> map -> filter -> groupBy -> saveAsObjectFile

The groupBy step produces a result of type RDD[(Int, Iterable[A])].
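For reference, a minimal Scala sketch of that pipeline; the paths, the parsing logic, and the record type A are placeholders, since the question does not show them:

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical record type standing in for A.
    case class A(key: Int, payload: String)

    // Placeholder parser; the real one is not shown in the question.
    def parse(line: String): A = {
      val parts = line.split("\t")
      A(parts(0).toInt, parts(1))
    }

    val sc = new SparkContext(new SparkConf().setAppName("groupBy-oom"))

    sc.textFile("hdfs:///input/path")          // read raw lines
      .map(parse)                              // parse each line into an A
      .filter(_.payload.nonEmpty)              // drop unwanted records
      .groupBy(_.key)                          // RDD[(Int, Iterable[A])]
      .saveAsObjectFile("hdfs:///output/path") // step where the OOM occurs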

The error happens at saveAsObjectFile. The only cause I can think of is that some keys hold too much data after the groupBy step. But I checked all the keys with Hive, and the largest key is 330808. Class A is not very large either.
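For what it's worth, the same skew check can be done inside Spark itself; this is a hedged sketch, assuming filtered is the RDD[A] right before the groupBy:

    // Count records per key to spot skew; only the top 10 come back to the driver.
    val keyCounts = filtered.map(a => (a.key, 1L)).reduceByKey(_ + _)
    keyCounts.sortBy(_._2, ascending = false).take(10).foreach(println)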

My configuration is: --driver-memory 20G --num-executors 120 --executor-memory 30G
Spark version: 1.4

15/07/03 07:05:06 ERROR ActorSystemImpl: Uncaught fatal error from thread 
[sparkDriver-akka.remote.default-remote-dispatcher-5] shutting down ActorSystem [sparkDriver]
java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:2271)
        at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
        at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
        at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
        at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1876)
        at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1785)
        at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1188)
        at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
        at akka.serialization.JavaSerializer$$anonfun$toBinary$1.apply$mcV$sp(Serializer.scala:129)
        at akka.serialization.JavaSerializer$$anonfun$toBinary$1.apply(Serializer.scala:129)
        at akka.serialization.JavaSerializer$$anonfun$toBinary$1.apply(Serializer.scala:129)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at akka.serialization.JavaSerializer.toBinary(Serializer.scala:129)
        at akka.remote.MessageSerializer$.serialize(MessageSerializer.scala:36)
        at akka.remote.EndpointWriter$$anonfun$serializeMessage$1.apply(Endpoint.scala:845)
        at akka.remote.EndpointWriter$$anonfun$serializeMessage$1.apply(Endpoint.scala:845)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at akka.remote.EndpointWriter.serializeMessage(Endpoint.scala:844)
        at akka.remote.EndpointWriter.writeSend(Endpoint.scala:747)
        at akka.remote.EndpointWriter$$anonfun$2.applyOrElse(Endpoint.scala:722)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at akka.remote.EndpointActor.aroundReceive(Endpoint.scala:415)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)

2 Answers:

Answer 0 (score: 1)

A quick fix for a driver OutOfMemoryError is to increase the driver memory via the "spark.driver.memory" property.
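Note that spark.driver.memory must be set before the driver JVM starts, so pass it on the spark-submit command line (or in spark-defaults.conf) rather than in SparkConf at runtime. The 40G value, class name, and jar name below are purely illustrative:

    spark-submit --driver-memory 40G \
      --num-executors 120 --executor-memory 30G \
      --class YourMainClass your-app.jar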

The article below may help with understanding driver and executor memory allocation: http://www.wdong.org/wordpress/blog/2015/01/08/spark-on-yarn-where-have-all-my-memory-gone/

Also note that the groupByKey operation is expensive, so try to avoid it and prefer reduceByKey instead.

http://databricks.gitbooks.io/databricks-spark-knowledge-base/content/best_practices/prefer_reducebykey_over_groupbykey.html
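A hedged sketch of the difference, assuming pairs is an RDD[(Int, A)]; whether reduceByKey is applicable depends on the downstream logic, which the question does not show:

    // groupByKey ships every value for a key to a single task and materializes
    // the whole Iterable in memory -- this is what hurts on skewed keys.
    val grouped = pairs.groupByKey()                         // RDD[(Int, Iterable[A])]

    // reduceByKey merges values map-side before the shuffle, so only partial
    // aggregates cross the network. It only works when the per-key computation
    // can be expressed as an associative merge, e.g. a count:
    val counts = pairs.mapValues(_ => 1L).reduceByKey(_ + _) // RDD[(Int, Long)]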

Answer 1 (score: 1)

Your job is probably unbalanced, so some partitions receive a large number of keys (and their values). You can try adding more partitions and/or writing a custom partitioner that evens out the partitions based on what you know about your data.
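A minimal sketch of both options, again assuming pairs is an RDD[(Int, A)]; the partition count of 2000 and the hashing scheme are illustrative assumptions:

    import org.apache.spark.Partitioner

    // Option 1: raise the partition count so each partition holds fewer keys.
    val regrouped = pairs.groupByKey(2000)

    // Option 2: a custom partitioner that spreads keys using domain knowledge.
    class EvenPartitioner(parts: Int) extends Partitioner {
      override def numPartitions: Int = parts
      override def getPartition(key: Any): Int = {
        // Replace with logic that routes known-heavy keys apart; this default
        // just hashes, guarding against negative hash codes.
        val mod = key.hashCode % parts
        if (mod < 0) mod + parts else mod
      }
    }
    val custom = pairs.groupByKey(new EvenPartitioner(2000))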