I have recently been working with Talend Big Data jobs. I have written a few jobs that are currently running in production. The data processed in each run is less than 1 GB, but the input is .gz files. The job normally runs successfully, yet I frequently hit the error below, and when I rerun the job without any change it succeeds. Executor memory and memory overhead are set as expected, and the GC collector is set to G1GC. Any idea that could help would be appreciated; please let me know if you need more information.
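For context, the relevant Spark settings are configured through Talend's Spark configuration tab; expressed programmatically they would look roughly like the sketch below. The actual values differ, these are placeholders, and the property names assume Spark 1.6 on YARN as shown in the stack traces.

// Minimal sketch with placeholder values -- not the exact production settings.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.executor.memory", "4g")                      // executor heap (placeholder)
  .set("spark.yarn.executor.memoryOverhead", "1024")       // off-heap overhead in MB (Spark 1.6 property name)
  .set("spark.executor.extraJavaOptions", "-XX:+UseG1GC")  // G1 garbage collector on executors
  .set("spark.driver.extraJavaOptions", "-XX:+UseG1GC")    // G1 on the driver as well

The failing run produces the following log: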
ERROR executor.CoarseGrainedExecutorBackend: Driver 131.116.127.73:45245 disassociated! Shutting down.
[ERROR]: org.apache.spark.scheduler.LiveListenerBus - Listener SQLListener threw an exception
java.lang.NullPointerException
at org.apache.spark.sql.execution.ui.SQLListener.onTaskEnd(SQLListener.scala:167)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:42)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:80)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
[WARN ]: org.apache.spark.scheduler.TaskSetManager - Lost task 16.0 in stage 7.1 (TID 30308, dl200dn42.ddc.teliasonera.net): TaskKilled (killed intentionally)
[ERROR]: org.apache.spark.network.server.TransportRequestHandler - Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:580)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:175)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
[INFO ]: org.apache.spark.storage.MemoryStore - MemoryStore cleared
[INFO ]: org.apache.spark.storage.BlockManager - BlockManager stopped
[INFO ]: org.apache.spark.storage.BlockManagerMaster - BlockManagerMaster stopped
[INFO ]: org.apache.spark.scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint - OutputCommitCoordinator stopped!
[WARN ]: org.apache.spark.rpc.netty.Dispatcher - Message RemoteProcessDisconnected(dl200dn25.ddc.teliasonera.net:57604) dropped. RpcEnv already stopped.
[WARN ]: org.apache.spark.rpc.netty.Dispatcher - Message RemoteProcessDisconnected(dl200dn25.ddc.teliasonera.net:57604) dropped. RpcEnv already stopped.
[WARN ]: org.apache.spark.rpc.netty.Dispatcher - Message RemoteProcessDisconnected(dl200dn28.ddc.teliasonera.net:47162) dropped. RpcEnv already stopped.
[WARN ]: org.apache.spark.rpc.netty.Dispatcher - Message RemoteProcessDisconnected(dl200dn42.ddc.teliasonera.net:58644) dropped. RpcEnv already stopped.
[WARN ]: org.apache.spark.rpc.netty.Dispatcher - Message RemoteProcessDisconnected(dl200dn30.ddc.teliasonera.net:39768) dropped. RpcEnv already stopped.
[WARN ]: org.apache.spark.rpc.netty.Dispatcher - Message RemoteProcessDisconnected(dl200dn28.ddc.teliasonera.net:47162) dropped. RpcEnv already stopped.
[WARN ]: org.apache.spark.rpc.netty.Dispatcher - Message RemoteProcessDisconnected(dl200dn30.ddc.teliasonera.net:39768) dropped. RpcEnv already stopped.
[WARN ]: org.apache.spark.rpc.netty.Dispatcher - Message RemoteProcessDisconnected(dl200dn42.ddc.teliasonera.net:58644) dropped. RpcEnv already stopped.
[INFO ]: org.apache.spark.SparkContext - Successfully stopped SparkContext
[ERROR]: project_cdl_tenant.job_cdl_010_sourcename_abcd_rawtoaccess_0_1.job_CDL_010_sourcename_abcd_RawToAccess - TalendJob: 'job_CDL_010_sourcename_abcd_RawToAccess' - Failed with exit code: 1.
[INFO ]: org.apache.spark.util.ShutdownHookManager - Shutdown hook called
[INFO ]: org.apache.spark.util.ShutdownHookManager - Deleting directory /tmp/spark-afeaa959-3f23-4ffc-a5b4-eee7ed9838c0
[INFO ]: org.apache.spark.util.ShutdownHookManager - Deleting directory /tmp/spark-a5e1f6a5-7308-4441-ab95-1f1aab023c8a
[INFO ]: akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
[INFO ]: akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
[INFO ]: org.apache.zookeeper.ZooKeeper - Session: 0x162b8dd2d3c2595 closed
[INFO ]: org.apache.zookeeper.ClientCnxn - EventThread shut down
[INFO ]: CuratorFrameworkSingleton - Closing ZooKeeper client.
[ERROR]: project_cdl_tenant.job_000_sourcename_abcd_rawtobase_access_childload_0_1.job_000_sourcename_abcd_RawToBase_Access_ChildLoad - tRunJob_1 - Child job returns 1. It doesn't terminate normally.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/talend/6.4.1/jobserver_fin/agent/TalendJobServersFiles/repository/PROJECT_CDL_TENANT_job_000_sourcename_abcd_RawToBase_Access_ChildLoad_20180412_193439_A6T2O/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/talend/6.4.1/jobserver_fin/agent/TalendJobServersFiles/repository/PROJECT_CDL_TENANT_job_000_sourcename_abcd_RawToBase_Access_ChildLoad_20180412_193439_A6T2O/lib/talend-spark-assembly-1.6.0-cdh5.8.1-hadoop2.6.0-cdh5.8.1-with-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:156)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)
at project_cdl_tenant.job_cdl_010_sourcename_abcd_rawtoaccess_0_1.job_CDL_010_sourcename_abcd_RawToAccess.tHiveInput_1Process(job_CDL_010_sourcename_abcd_RawToAccess.java:2365)
at project_cdl_tenant.job_cdl_010_sourcename_abcd_rawtoaccess_0_1.job_CDL_010_sourcename_abcd_RawToAccess.tAvroInput_1_InputFormatAvroProcess(job_CDL_010_sourcename_abcd_RawToAccess.java:3718)
at project_cdl_tenant.job_cdl_010_sourcename_abcd_rawtoaccess_0_1.job_CDL_010_sourcename_abcd_RawToAccess.run(job_CDL_010_sourcename_abcd_RawToAccess.java:4070)
at project_cdl_tenant.job_cdl_010_sourcename_abcd_rawtoaccess_0_1.job_CDL_010_sourcename_abcd_RawToAccess.runJobInTOS(job_CDL_010_sourcename_abcd_RawToAccess.java:3901)
at project_cdl_tenant.job_cdl_010_sourcename_abcd_rawtoaccess_0_1.job_CDL_010_sourcename_abcd_RawToAccess.main(job_CDL_010_sourcename_abcd_RawToAccess.java:3805)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7.1 failed 4 times, most recent failure: Lost task 0.3 in stage 7.1 (TID 30301, dl200dn42.ddc.teliasonera.net): java.io.FileNotFoundException: /data/disk11/yarn/nm/usercache/prodfinbatch01/appcache/application_1523529055210_30771/blockmgr-23238b36-41e7-4192-9af1-4cbe68cefcf3/1e/temp_shuffle_57543b9a-3fe3-4692-9cf9-40d1f89b8455 (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:102)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1843)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1856)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1933)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:150)
... 21 more
Caused by: java.io.FileNotFoundException: /data/disk11/yarn/nm/usercache/prodfinbatch01/appcache/application_1523529055210_30771/blockmgr-23238b36-41e7-4192-9af1-4cbe68cefcf3/1e/temp_shuffle_57543b9a-3fe3-4692-9cf9-40d1f89b8455 (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:102)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:156)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)
at project_cdl_tenant.job_cdl_010_sourcename_abcd_rawtoaccess_0_1.job_CDL_010_sourcename_abcd_RawToAccess.tHiveInput_1Process(job_CDL_010_sourcename_abcd_RawToAccess.java:2365)
at project_cdl_tenant.job_cdl_010_sourcename_abcd_rawtoaccess_0_1.job_CDL_010_sourcename_abcd_RawToAccess.tAvroInput_1_InputFormatAvroProcess(job_CDL_010_sourcename_abcd_RawToAccess.java:3718)
at project_cdl_tenant.job_cdl_010_sourcename_abcd_rawtoaccess_0_1.job_CDL_010_sourcename_abcd_RawToAccess.run(job_CDL_010_sourcename_abcd_RawToAccess.java:4070)
at project_cdl_tenant.job_cdl_010_sourcename_abcd_rawtoaccess_0_1.job_CDL_010_sourcename_abcd_RawToAccess.runJobInTOS(job_CDL_010_sourcename_abcd_RawToAccess.java:3901)
at project_cdl_tenant.job_cdl_010_sourcename_abcd_rawtoaccess_0_1.job_CDL_010_sourcename_abcd_RawToAccess.main(job_CDL_010_sourcename_abcd_RawToAccess.java:3805)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7.1 failed 4 times, most recent failure: Lost task 0.3 in stage 7.1 (TID 30301, dl200dn42.ddc.teliasonera.net): java.io.FileNotFoundException:
Answer 0 (score: 0):
I can see that the error message also contains
dbl
Check whether you are reading from and writing to the same path.
Alternatively, this could have been caused by lost blocks.
Try rerunning the job.
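If the job does write back to the same HDFS path it reads from, one way to avoid that is to write to a separate staging directory and swap it in only after the write succeeds. The sketch below is only an illustration: the paths, the writeViaStaging helper, and the use of Parquet are all hypothetical assumptions, using the Spark 1.6-style DataFrame API that appears in the stack trace.

// Hedged sketch: read from inputPath, write to stagingPath, then replace the source.
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.sql.SQLContext

def writeViaStaging(sqlContext: SQLContext, inputPath: String, stagingPath: String): Unit = {
  // Read the source data (hypothetical format/path for illustration).
  val df = sqlContext.read.parquet(inputPath)

  // Write the result to a separate staging path, never directly to inputPath.
  df.write.mode("overwrite").parquet(stagingPath)

  // Swap the staging output in only after the write has completed successfully.
  val fs = FileSystem.get(sqlContext.sparkContext.hadoopConfiguration)
  fs.delete(new Path(inputPath), true)
  fs.rename(new Path(stagingPath), new Path(inputPath))
}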