pyspark

Time: 2016-11-14 23:58:14

Tags: memory, pyspark, out-of-memory

[Picture from the Spark UI]

I am using Spark 1.6. When I run a simple df.show(2), I hit an error like this:
    An error occurred while calling o143.showString.
    : org.apache.spark.SparkException: Job aborted due to stage      
    failure: Task 6 in stage 6.0 failed 4 times, most recent failure:
     Lost task 6.3 in stage 6.0 
     ExecutorLostFailure (executor 2 exited caused by one of the 
     running tasks) Reason: Slave lost

When I persist the DataFrame, the Spark UI shows that the shuffle write is very large, and the job runs for a long time before returning the same error. From some searching I learned that this may be an out-of-memory problem. Following this link, out of memory error Java, I repartitioned the data to 1000 partitions, but that still did not help much.
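For reference, the persist and repartition attempt described above looks roughly like this (a minimal sketch; df stands in for the DataFrame from the question, whose source is not shown):

    # Sketch of the attempt described above; how df is built is not shown in the question.
    df = df.repartition(1000)  # spread the data across 1000 partitions
    df.persist()               # cache the DataFrame, as described above
    df.show(2)                 # the action that triggers the failure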

I set my SparkConf as:

     conf = (SparkConf()
             .set("spark.driver.maxResultSize", "150g")
             .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer"))

My server has up to 200 GB of memory available.
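For completeness, this is roughly how such a conf is wired into a PySpark 1.6 application (a minimal sketch; the two executor memory settings are illustrative assumptions that are not in the question, included only to show where per-executor memory would be configured):

    from pyspark import SparkConf, SparkContext
    from pyspark.sql import SQLContext

    conf = (SparkConf()
            .set("spark.driver.maxResultSize", "150g")
            .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            # The two settings below are assumed values, not from the question:
            .set("spark.executor.memory", "16g")                 # heap per executor
            .set("spark.yarn.executor.memoryOverhead", "4096"))  # off-heap overhead, in MB

    sc = SparkContext(conf=conf)
    sqlContext = SQLContext(sc)  # the Spark 1.6 entry point for DataFrames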

Do you have any ideas on how to solve this, or pointers to relevant links? PySpark-specific answers would be the most helpful.

Below is the error log from YARN:

     Application application_1477088172315_0118 failed 2 times due to
     AM Container for appattempt_1477088172315_0118_000006 exited 
     with exitCode: 10

     For more detailed output, check application tracking page: Then, 
     click on links to logs of each attempt.
     Diagnostics: Exception from container-launch.
     Container id: container_1477088172315_0118_06_000001
     Exit code: 10
     Stack trace: ExitCodeException exitCode=10:
     at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
     at org.apache.hadoop.util.Shell.run(Shell.java:479)
     at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
     at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
     at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
     at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)


     Container exited with a non-zero exit code 10
     Failing this attempt. Failing the application.

Below is the error message from the notebook:

    Py4JJavaError: An error occurred while calling o71.showString.
    : org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 15.0 failed 4 times, most recent failure: Lost task 1.3 in stage 15.0 (): ExecutorLostFailure (executor 26 exited caused by one of the running tasks) Reason: Slave lost
    Driver stacktrace:
     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
     at scala.Option.foreach(Option.scala:236)
     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
     at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212)
     at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
     at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
     at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
     at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
     at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
     at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
     at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
     at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
     at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
     at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
     at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
     at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
     at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
     at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
     at py4j.Gateway.invoke(Gateway.java:259)
     at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
     at py4j.commands.CallCommand.execute(CallCommand.java:79)
     at py4j.GatewayConnection.run(GatewayConnection.java:209)
     at java.lang.Thread.run(Thread.java:745)

Thanks

0 Answers:

No answers yet.