Error when running a jar with spark-submit after building it in IntelliJ

Date: 2019-03-04 00:23:44

Tags: scala apache-spark intellij-idea garbage-collection spark-submit

Why does the application run fine in IntelliJ, but fail with the error below once I package it into a jar and run it with spark-submit?

    spark-submit --class com.aa xx.jar --driver-memory 6g --executor-memory 4g

Increasing or decreasing the memory makes no difference; the result is always the same.
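One detail worth checking, assuming standard spark-submit behavior: anything placed after the application jar is passed to the main class as program arguments rather than parsed by spark-submit itself, so in the command above the --driver-memory and --executor-memory flags are most likely being ignored, which would explain why changing them has no effect. The conventional ordering puts all options before the jar:

    spark-submit \
      --class com.aa \
      --driver-memory 6g \
      --executor-memory 4g \
      xx.jar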

I would appreciate it if someone could explain the difference between running in IntelliJ and running via spark-submit, and how to fix this.
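For context, here is a minimal sketch of the kind of driver setup whose behavior differs between the two environments (the object name, app name, and master setting are assumptions, since the question does not show the code): in IntelliJ the driver usually runs in a single local JVM whose heap comes from the IDE's run configuration, while under spark-submit the driver and executor heaps are fixed by --driver-memory and --executor-memory before the JVMs start.

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical skeleton; the real com.aa class is not shown in the question.
    object Main {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("xx")
          // Typical for IDE runs; under spark-submit the master normally comes
          // from the --master flag or spark-defaults.conf instead.
          .setIfMissing("spark.master", "local[*]")
        val sc = new SparkContext(conf)
        // ... shuffle-heavy job logic (e.g. reduceByKey), which would match the
        // ExternalAppendOnlyMap spills in the log below ...
        sc.stop()
      }
    }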

19/03/04 09:01:11 INFO ExternalAppendOnlyMap: Thread 67 spilling in-memory map of 371.3 MB to disk (1 time so far)
19/03/04 09:01:14 INFO ExternalAppendOnlyMap: Thread 67 spilling in-memory map of 371.3 MB to disk (2 times so far)
19/03/04 09:01:17 INFO ExternalAppendOnlyMap: Thread 67 spilling in-memory map of 371.3 MB to disk (3 times so far)
19/03/04 09:01:19 INFO ExternalAppendOnlyMap: Thread 67 spilling in-memory map of 371.3 MB to disk (4 times so far)
19/03/04 09:01:23 INFO BlockManagerInfo: Removed broadcast_4_piece0 on 10.60.130.110:59424 in memory (size: 7.8 KB, free: 366.3 MB)
19/03/04 09:02:28 WARN TaskMemoryManager: leak 249.3 MB memory from org.apache.spark.util.collection.ExternalAppendOnlyMap@1d39d583
19/03/04 09:02:30 ERROR Executor: Exception in task 0.0 in stage 5.0 (TID 5)
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:68)
        at java.lang.StringBuilder.<init>(StringBuilder.java:89)
        at java.io.ObjectInputStream$BlockDataInputStream.readUTFBody(ObjectInputStream.java:3400)
        at java.io.ObjectInputStream$BlockDataInputStream.readUTF(ObjectInputStream.java:3220)
        at java.io.ObjectInputStream.readString(ObjectInputStream.java:1900)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1559)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1970)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1562)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2282)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2206)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1568)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1970)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1562)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2282)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2206)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1568)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:428)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
        at org.apache.spark.serializer.DeserializationStream.readValue(Serializer.scala:159)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$DiskMapIterator.readNextItem(ExternalAppendOnlyMap.scala:515)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$DiskMapIterator.hasNext(ExternalAppendOnlyMap.scala:535)
        at scala.collection.Iterator$$anon$1.hasNext(Iterator.scala:1004)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator.org$apache$spark$util$collection$ExternalAppendOnlyMap$ExternalIterator$$readNextHashCode(ExternalAppendOnlyMap.scala:336)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator$$anonfun$next$1.apply(ExternalAppendOnlyMap.scala:409)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator$$anonfun$next$1.apply(ExternalAppendOnlyMap.scala:407)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator.next(ExternalAppendOnlyMap.scala:407)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator.next(ExternalAppendOnlyMap.scala:302)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
19/03/04 09:02:32 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:68)
        at java.lang.StringBuilder.<init>(StringBuilder.java:89)
        at java.io.ObjectInputStream$BlockDataInputStream.readUTFBody(ObjectInputStream.java:3400)
        at java.io.ObjectInputStream$BlockDataInputStream.readUTF(ObjectInputStream.java:3220)
        at java.io.ObjectInputStream.readString(ObjectInputStream.java:1900)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1559)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1970)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1562)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2282)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2206)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1568)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1970)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1562)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2282)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2206)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1568)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:428)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
        at org.apache.spark.serializer.DeserializationStream.readValue(Serializer.scala:159)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$DiskMapIterator.readNextItem(ExternalAppendOnlyMap.scala:515)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$DiskMapIterator.hasNext(ExternalAppendOnlyMap.scala:535)
        at scala.collection.Iterator$$anon$1.hasNext(Iterator.scala:1004)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator.org$apache$spark$util$collection$ExternalAppendOnlyMap$ExternalIterator$$readNextHashCode(ExternalAppendOnlyMap.scala:336)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator$$anonfun$next$1.apply(ExternalAppendOnlyMap.scala:409)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator$$anonfun$next$1.apply(ExternalAppendOnlyMap.scala:407)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator.next(ExternalAppendOnlyMap.scala:407)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator.next(ExternalAppendOnlyMap.scala:302)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)

0 Answers