Spark Structured Streaming - java.io.IOException: DestHost:destPort, LocalHost:localPort. Failed on local exception: java.io.IOException

Time: 2019-07-25 12:58:22

Tags: apache-spark apache-spark-sql yarn spark-structured-streaming

I have a Spark Structured Streaming job running on a YARN cluster. It had been working fine until now, but I suddenly started getting this error:

java.io.IOException: DestHost:destPort host:port , LocalHost:localPort host:port. Failed on local exception: java.io.IOException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
    at org.apache.hadoop.ipc.Client.call(Client.java:1443)
    at org.apache.hadoop.ipc.Client.call(Client.java:1353)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy15.getListing(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:667)
    at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy16.getListing(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1641)
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1625)
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1055)
    at org.apache.hadoop.hdfs.DistributedFileSystem.access$1000(DistributedFileSystem.java:131)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1119)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1116)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1126)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.fetchFiles(HDFSBackedStateStoreProvider.scala:579)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.cleanup(HDFSBackedStateStoreProvider.scala:526)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.doMaintenance(HDFSBackedStateStoreProvider.scala:224)
    at org.apache.spark.sql.execution.streaming.state.StateStore$$anonfun$org$apache$spark$sql$execution$streaming$state$StateStore$$doMaintenance$2.apply(StateStore.scala:425)
    at org.apache.spark.sql.execution.streaming.state.StateStore$$anonfun$org$apache$spark$sql$execution$streaming$state$StateStore$$doMaintenance$2.apply(StateStore.scala:422)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.sql.execution.streaming.state.StateStore$.org$apache$spark$sql$execution$streaming$state$StateStore$$doMaintenance(StateStore.scala:422)
    at org.apache.spark.sql.execution.streaming.state.StateStore$$anonfun$startMaintenanceIfNeeded$1.apply$mcV$sp(StateStore.scala:406)
    at org.apache.spark.sql.execution.streaming.state.StateStore$MaintenanceTask$$anon$1.run(StateStore.scala:322)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException
    at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1031)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1062)
Caused by: java.lang.InterruptedException
    ... 2 more
19/07/25 15:37:59 INFO StateStore: Env is not null
19/07/25 15:37:59 INFO StateStore: Retrieved reference to StateStoreCoordinator: org.apache.spark.sql.execution.streaming.state.StateStoreCoordinatorRef@4ace356c
19/07/25 15:37:59 INFO StateStore: Unloaded HDFSStateStoreProvider[id = (op=0,part=8),dir = hdfs://host:port/
19/07/25 15:37:59 INFO StateStore: Env is not null
19/07/25 15:37:59 INFO StateStore: Retrieved reference to StateStoreCoordinator: org.apache.spark.sql.execution.streaming.state.StateStoreCoordinatorRef@4ace356c
19/07/25 15:37:59 WARN HDFSBackedStateStoreProvider: Error doing snapshots for HDFSStateStoreProvider[id = (op=1,part=6),dir = hdfs://host:port/location
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:473)
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1639)
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1625)
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1055)
    at org.apache.hadoop.hdfs.DistributedFileSystem.access$1000(DistributedFileSystem.java:131)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1119)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1116)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1126)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.fetchFiles(HDFSBackedStateStoreProvider.scala:579)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.doSnapshot(HDFSBackedStateStoreProvider.scala:498)
    at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.doMaintenance(HDFSBackedStateStoreProvider.scala:223)
    at org.apache.spark.sql.execution.streaming.state.StateStore$$anonfun$org$apache$spark$sql$execution$streaming$state$StateStore$$doMaintenance$2.apply(StateStore.scala:425)
    at org.apache.spark.sql.execution.streaming.state.StateStore$$anonfun$org$apache$spark$sql$execution$streaming$state$StateStore$$doMaintenance$2.apply(StateStore.scala:422)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.sql.execution.streaming.state.StateStore$.org$apache$spark$sql$execution$streaming$state$StateStore$$doMaintenance(StateStore.scala:422)
    at org.apache.spark.sql.execution.streaming.state.StateStore$$anonfun$startMaintenanceIfNeeded$1.apply$mcV$sp(StateStore.scala:406)
    at org.apache.spark.sql.execution.streaming.state.StateStore$MaintenanceTask$$anon$1.run(StateStore.scala:322)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

I don't know why this error occurs.
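
From what I can tell, the "Filesystem closed" entries mean that a cached, shared Hadoop FileSystem instance was closed by another thread while the state store maintenance task was still using it. One experiment I am considering (untested; this is just a guess on my part) is disabling the HDFS client cache so each caller gets its own instance:

    # Hedged experiment: disable the shared HDFS FileSystem cache.
    # fs.hdfs.impl.disable.cache is a standard Hadoop client setting,
    # passed through to Hadoop via Spark's spark.hadoop.* prefix.
    spark-submit \
      ... \
      --conf spark.hadoop.fs.hdfs.impl.disable.cache=true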

My job configuration (a rough spark-submit sketch follows the list):

- executor-cores: 1
- executor-memory: 1G
- driver-memory: 2G
- number-of-executors: 3
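
For reference, this is roughly how I submit the job (the main class and jar names are placeholders, not the real ones):

    # Rough sketch of my submit command; class and jar are placeholders
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 3 \
      --executor-cores 1 \
      --executor-memory 1G \
      --driver-memory 2G \
      --class com.example.StreamingJob \
      streaming-job.jar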

I am running on YARN in cluster deploy mode. The job initially runs for 1-2 hours, but then it fails with the maximum-executor-failures error (the default limit is 6). How can I solve this problem?
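
If it helps: I believe the limit in question is spark.yarn.max.executor.failures, which defaults to twice the number of executors (minimum 3), i.e. 6 for my 3 executors. I assume it could be raised like this, though that would only postpone the failure rather than fix its cause:

    # Hedged sketch: raise the executor-failure threshold (value is arbitrary)
    spark-submit \
      ... \
      --conf spark.yarn.max.executor.failures=12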

Edit: In addition to the error above, I am now also getting these errors:

org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval

java.lang.NoSuchMethodException: java.nio.channels.ClosedByInterruptException.<init>(java.lang.String)
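
As the first message says, that timeout is controlled by spark.executor.heartbeatInterval (default 10s), which is supposed to stay well below spark.network.timeout (default 120s). A sketch of how I assume both could be raised (the values are guesses, not a verified fix):

    # Hedged sketch: give heartbeats more headroom (values are guesses);
    # spark.executor.heartbeatInterval must stay well below spark.network.timeout
    spark-submit \
      ... \
      --conf spark.executor.heartbeatInterval=30s \
      --conf spark.network.timeout=300s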

Edit 2: I am also getting the following warning:

WARN TaskMemoryManager: Failed to allocate a page (16777216 bytes), try again.
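
That warning means a 16 MiB (16777216-byte) execution-memory page could not be allocated, which seems plausible with only 1G per executor. A sketch of the more generous memory settings I am considering (the values are guesses, not a verified fix):

    # Hedged sketch: larger executor heap plus explicit off-heap overhead.
    # spark.executor.memoryOverhead is the Spark 2.3+ name for this setting.
    spark-submit \
      ... \
      --executor-memory 4G \
      --conf spark.executor.memoryOverhead=1g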

0 Answers