Spark finishes the job, but RStudio throws a GC overhead exception

Asked: 2019-07-04 13:18:42

Tags: apache-spark bigdata parquet sparklyr rstudio-server

We are new to submitting Spark jobs from RStudio via sparklyr. In the Spark History server we can see that the long-running, large job completes, but RStudio then shows a Java GC overhead exception. Our datasets are very large.

Do we need to increase the memory available to sparklyr's Java process from RStudio? If so, how? We have already tried:

options(java.parameters = "-Xmx32g")
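As far as we can tell, options(java.parameters = ...) is read by rJava-based packages, while sparklyr launches its own JVM through spark-submit, so that setting may never reach the backend. A minimal sketch of what we are considering instead, raising the driver memory through spark_config() before connecting (the master URL and the "32g"/"16g" values are placeholders for our cluster):

library(sparklyr)

# Sketch: set driver JVM options via sparklyr's shell config,
# which are passed to spark-submit when the connection is opened.
config <- spark_config()
config$`sparklyr.shell.driver-memory` <- "32g"  # driver JVM heap
config$spark.driver.maxResultSize    <- "16g"   # cap on results returned to the driver

sc <- spark_connect(master = "yarn-client", config = config)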

After the job completes in Spark, the exception we hit in RStudio is:

Error: java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.lang.StringCoding.decode(StringCoding.java:215)
        at java.lang.String.<init>(String.java:463)
        at java.lang.String.<init>(String.java:515)
        at org.apache.spark.unsafe.types.UTF8String.toString(UTF8String.java:1213)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.createExternalRow_0_0$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
        at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collectFromPlan$1.apply(Dataset.scala:3276)
        at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collectFromPlan$1.apply(Dataset.scala:3273)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3273)
        at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
        at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2722)
        at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
        at org.apache.spark.sql.Dataset.collect(Dataset.scala:2722)
        at sparklyr.Utils$.collect(utils.scala:200)
        at sparklyr.Utils.collect(utils.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at sparklyr.Invoke.invoke(invoke.scala:139)
        at sparklyr.StreamHandler.handleMethodCall(stream.scala:123)
        at sparklyr.StreamHandler.read(stream.scala:66)
        at sparklyr.BackendHandler.channelRead0(handler.scala:51)
        at sparklyr.BackendHandler.channelRead0(handler.scala:4)
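The trace goes through sparklyr.Utils.collect and Dataset.collect, so the failure seems to happen while the full result is materialized on the driver, after the cluster-side job has already finished. A hedged sketch of how we might avoid collecting everything into the R session (the table name and output path are placeholders):

library(sparklyr)
library(dplyr)

# Keep the result in Spark rather than pulling it all into R.
result_tbl <- tbl(sc, "our_result_table")               # placeholder table name

# Persist the full result on the cluster instead of collect()-ing it.
spark_write_parquet(result_tbl, "hdfs:///tmp/result")   # placeholder path

# Bring back only a small slice for inspection in RStudio.
preview <- result_tbl %>% head(1000) %>% collect()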

0 Answers:

No answers yet