Spark cluster OutOfMemoryError: more memory needed

Date: 2018-06-18 14:47:42

Tags: azure apache-spark databricks

I have a Spark cluster with a 28 GB driver and 8x 56 GB workers, and I am trying to process a 4 GB file. I can process this file successfully without Spark on my own laptop with 16 GB of RAM, so there is no memory leak that would consume the full 56 GB, and the job also works on smaller sample files. I submit the job with Azure Databricks (although that should be irrelevant). All configuration on my Spark cluster is left at the defaults.

The offending code:

  var ou = distData.map(s => ProcessObj.exec(s.toString)).collect

Full stack trace:

java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at org.apache.spark.util.ByteBufferOutputStream.write(ByteBufferOutputStream.scala:41)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1853)
at java.io.ObjectOutputStream.write(ObjectOutputStream.java:709)
at java.nio.channels.Channels$WritableByteChannelImpl.write(Channels.java:458)
at org.apache.spark.util.SerializableBuffer$$anonfun$writeObject$1.apply(SerializableBuffer.scala:49)
at org.apache.spark.util.SerializableBuffer$$anonfun$writeObject$1.apply(SerializableBuffer.scala:47)
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1359)
at org.apache.spark.util.SerializableBuffer.writeObject(SerializableBuffer.scala:47)
at sun.reflect.GeneratedMethodAccessor153.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1128)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:43)
at org.apache.spark.rpc.netty.RequestMessage.serialize(NettyRpcEnv.scala:565)
at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:193)
at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:528)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anonfun$launchTasks$1.apply(CoarseGrainedSchedulerBackend.scala:315)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anonfun$launchTasks$1.apply(CoarseGrainedSchedulerBackend.scala:293)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)

Update

I tried the following Spark configuration and still got the out-of-memory error:

spark.executor.memory 40g
spark.driver.memory 20g
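
For context, a minimal sketch of how such values are typically applied when building a session yourself (the names here are hypothetical, not from the original post). Note that spark.driver.memory only takes effect if it is set before the driver JVM starts, for example in the cluster's Spark config, so setting it from already-running notebook code has no effect:

  import org.apache.spark.sql.SparkSession

  // Hypothetical job setup; spark.executor.memory is read when the context is created.
  val spark = SparkSession.builder()
    .appName("process-file")                 // hypothetical app name
    .config("spark.executor.memory", "40g")
    .getOrCreate()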

1 Answer:

Answer 0 (score: 0)

By default, spark.driver.memory is 1 GB. It may differ on a pre-configured Spark setup on Azure, but it still appears to be less than the amount of data you are trying to process.
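
A quick way to check the effective setting (a sketch; assumes an active SparkSession named spark, as in a Databricks notebook):

  // Prints the configured driver memory, falling back to the 1g default if unset.
  println(spark.conf.get("spark.driver.memory", "1g (default)"))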

You then call collect() on the DataFrame, so the entire result is transferred to the driver host and has to fit in the memory of the driver process, which is a JVM with a preset amount of available memory (it usually cannot be resized to use all of the host's memory).
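
If the full result does not need to live on the driver, a common alternative (a sketch, assuming distData is an RDD, ProcessObj.exec returns a String, and writing to distributed storage is acceptable; the output path is hypothetical) is to keep the data distributed and bring back only a small sample:

  // Keep the work distributed: write results out from the executors
  // instead of collecting everything into one driver-side array.
  val processed = distData.map(s => ProcessObj.exec(s.toString))
  processed.saveAsTextFile("dbfs:/output/processed")   // hypothetical output path

  // Inspect only a handful of results on the driver.
  processed.take(10).foreach(println)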