I have set up Hadoop and Spark on a cluster, using 10.1.10.101 as the master node and 10.1.10.102 ~ 10.1.10.110 as the slave nodes. I wrote the following simple Python script:
import random

from pyspark.conf import SparkConf
from pyspark.context import SparkContext

sc = SparkContext(conf=SparkConf().setAppName("test"))

# Returns True when a random point in the unit square lands inside the quarter circle.
def inside(p):
    x, y = random.random(), random.random()
    return x*x + y*y < 1

# NUM_SAMPLES is the total number of random points to draw.
count = sc.parallelize(xrange(0, NUM_SAMPLES)) \
          .filter(inside).count()
I submitted it with spark-submit as follows (pyspark_script.py is the Python file given above):
spark-submit --master yarn /home/test/pyspark_script.py
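For reference, the same script can also be submitted in local mode, which is a quick way to check whether a failure comes from the script itself rather than from the cluster (local[2] simply runs Spark with two local worker threads):
spark-submit --master local[2] /home/test/pyspark_script.py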
The YARN submission produced the following lengthy output, which contains several ERROR and WARN messages:
18/03/05 21:01:17 INFO spark.SparkContext: Running Spark version 2.1.0
18/03/05 21:01:17 INFO spark.SecurityManager: Changing view acls to: hadoop
18/03/05 21:01:17 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/03/05 21:01:17 INFO spark.SecurityManager: Changing view acls groups to:
18/03/05 21:01:17 INFO spark.SecurityManager: Changing modify acls groups to:
18/03/05 21:01:17 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/03/05 21:01:17 INFO util.Utils: Successfully started service 'sparkDriver' on port 35615.
18/03/05 21:01:17 INFO spark.SparkEnv: Registering MapOutputTracker
18/03/05 21:01:17 INFO spark.SparkEnv: Registering BlockManagerMaster
18/03/05 21:01:17 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/03/05 21:01:17 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/03/05 21:01:17 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-2299fc0c-6efe-47e1-9bb5-7e1ca7aa49a1
18/03/05 21:01:17 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
18/03/05 21:01:18 INFO spark.SparkEnv: Registering OutputCommitCoordinator
18/03/05 21:01:18 INFO util.log: Logging initialized @2328ms
18/03/05 21:01:18 INFO server.Server: jetty-9.2.z-SNAPSHOT
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@69e220f7{/jobs,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3f754bab{/jobs/json,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2b644d36{/jobs/job,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3c6a49f3{/jobs/job/json,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@44f73311{/stages,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@514a0037{/stages/json,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@58ceead5{/stages/stage,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@368515ee{/stages/stage/json,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4b17c794{/stages/pool,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4add2d79{/stages/pool/json,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@656afeb5{/storage,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7b5ebd93{/storage/json,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@e00fe0b{/storage/rdd,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2d15ac57{/storage/rdd/json,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1fb87016{/environment,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1184f457{/environment/json,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@187da0ca{/executors,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@180ac086{/executors/json,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@446e3b51{/executors/threadDump,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@727e59c7{/executors/threadDump/json,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7f0dcb2{/static,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@49b0323a{/,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7c312cee{/api,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7c01e2ce{/jobs/job/kill,null,AVAILABLE}
18/03/05 21:01:18 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7d15b4b0{/stages/stage/kill,null,AVAILABLE}
18/03/05 21:01:18 INFO server.ServerConnector: Started ServerConnector@1307dfca{HTTP/1.1}{0.0.0.0:4040}
18/03/05 21:01:18 INFO server.Server: Started @2433ms
18/03/05 21:01:18 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
18/03/05 21:01:18 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.1.10.101:4040
18/03/05 21:01:18 INFO client.RMProxy: Connecting to ResourceManager at hadoop-datanode101.zipeiyi.corp/10.1.10.101:8032
18/03/05 21:01:19 INFO yarn.Client: Requesting a new application from cluster with 10 NodeManagers
18/03/05 21:01:19 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
18/03/05 21:01:19 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
18/03/05 21:01:19 INFO yarn.Client: Setting up container launch context for our AM
18/03/05 21:01:19 INFO yarn.Client: Setting up the launch environment for our AM container
18/03/05 21:01:19 INFO yarn.Client: Preparing resources for our AM container
18/03/05 21:01:20 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
18/03/05 21:01:21 INFO yarn.Client: Uploading resource file:/tmp/spark-a93238f0-37a0-4e2f-a9b3-0857b87af379/__spark_libs__7062584398451840567.zip -> hdfs://hadoop-datanode101.zipeiyi.corp:8020/user/hadoop/.sparkStaging/application_1520253963057_0003/__spark_libs__7062584398451840567.zip
18/03/05 21:01:23 INFO yarn.Client: Uploading resource file:/app/zpy/spark/python/lib/pyspark.zip -> hdfs://hadoop-datanode101.zipeiyi.corp:8020/user/hadoop/.sparkStaging/application_1520253963057_0003/pyspark.zip
18/03/05 21:01:23 INFO yarn.Client: Uploading resource file:/app/zpy/spark/python/lib/py4j-0.10.4-src.zip -> hdfs://hadoop-datanode101.zipeiyi.corp:8020/user/hadoop/.sparkStaging/application_1520253963057_0003/py4j-0.10.4-src.zip
18/03/05 21:01:23 INFO yarn.Client: Uploading resource file:/tmp/spark-a93238f0-37a0-4e2f-a9b3-0857b87af379/__spark_conf__4059920525961811317.zip -> hdfs://hadoop-datanode101.zipeiyi.corp:8020/user/hadoop/.sparkStaging/application_1520253963057_0003/__spark_conf__.zip
18/03/05 21:01:23 INFO spark.SecurityManager: Changing view acls to: hadoop
18/03/05 21:01:23 INFO spark.SecurityManager: Changing modify acls to: hadoop
18/03/05 21:01:23 INFO spark.SecurityManager: Changing view acls groups to:
18/03/05 21:01:23 INFO spark.SecurityManager: Changing modify acls groups to:
18/03/05 21:01:23 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/03/05 21:01:23 INFO yarn.Client: Submitting application application_1520253963057_0003 to ResourceManager
18/03/05 21:01:23 INFO impl.YarnClientImpl: Submitted application application_1520253963057_0003
18/03/05 21:01:23 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1520253963057_0003 and attemptId None
18/03/05 21:01:24 INFO yarn.Client: Application report for application_1520253963057_0003 (state: ACCEPTED)
18/03/05 21:01:24 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1520254883772
final status: UNDEFINED
tracking URL: http://hadoop-datanode101.zipeiyi.corp:8088/proxy/application_1520253963057_0003/
user: hadoop
18/03/05 21:01:25 INFO yarn.Client: Application report for application_1520253963057_0003 (state: ACCEPTED)
18/03/05 21:01:26 INFO yarn.Client: Application report for application_1520253963057_0003 (state: ACCEPTED)
18/03/05 21:01:27 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
18/03/05 21:01:27 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> hadoop-datanode101.zipeiyi.corp, PROXY_URI_BASES -> http://hadoop-datanode101.zipeiyi.corp:8088/proxy/application_1520253963057_0003), /proxy/application_1520253963057_0003
18/03/05 21:01:27 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
18/03/05 21:01:27 INFO yarn.Client: Application report for application_1520253963057_0003 (state: ACCEPTED)
18/03/05 21:01:28 INFO yarn.Client: Application report for application_1520253963057_0003 (state: RUNNING)
18/03/05 21:01:28 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 10.1.10.101
ApplicationMaster RPC port: 0
queue: default
start time: 1520254883772
final status: UNDEFINED
tracking URL: http://hadoop-datanode101.zipeiyi.corp:8088/proxy/application_1520253963057_0003/
user: hadoop
18/03/05 21:01:28 INFO cluster.YarnClientSchedulerBackend: Application application_1520253963057_0003 has started running.
18/03/05 21:01:28 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44883.
18/03/05 21:01:28 INFO netty.NettyBlockTransferService: Server created on 10.1.10.101:44883
18/03/05 21:01:28 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/03/05 21:01:28 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.1.10.101, 44883, None)
18/03/05 21:01:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.1.10.101:44883 with 366.3 MB RAM, BlockManagerId(driver, 10.1.10.101, 44883, None)
18/03/05 21:01:28 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.1.10.101, 44883, None)
18/03/05 21:01:28 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.1.10.101, 44883, None)
18/03/05 21:01:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1893d602{/metrics/json,null,AVAILABLE}
18/03/05 21:01:32 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
18/03/05 21:01:32 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> hadoop-datanode101.zipeiyi.corp, PROXY_URI_BASES -> http://hadoop-datanode101.zipeiyi.corp:8088/proxy/application_1520253963057_0003), /proxy/application_1520253963057_0003
18/03/05 21:01:32 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
18/03/05 21:01:34 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.1.10.101:55212) with ID 1
18/03/05 21:01:34 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop-datanode101.zipeiyi.corp:39278 with 366.3 MB RAM, BlockManagerId(1, hadoop-datanode101.zipeiyi.corp, 39278, None)
18/03/05 21:01:35 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (10.1.10.101:55216) with ID 2
18/03/05 21:01:35 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
18/03/05 21:01:35 INFO storage.BlockManagerMasterEndpoint: Registering block manager hadoop-datanode101.zipeiyi.corp:41472 with 366.3 MB RAM, BlockManagerId(2, hadoop-datanode101.zipeiyi.corp, 41472, None)
Traceback (most recent call last):
File "/app/zpy/test/test.py", line 14, in <module>
count = sc.parallelize(xrange(0, NUM_SAMPLES)) \
NameError: name 'xrange' is not defined
18/03/05 21:01:35 INFO spark.SparkContext: Invoking stop() from shutdown hook
18/03/05 21:01:35 INFO server.ServerConnector: Stopped ServerConnector@1307dfca{HTTP/1.1}{0.0.0.0:4040}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7d15b4b0{/stages/stage/kill,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7c01e2ce{/jobs/job/kill,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7c312cee{/api,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@49b0323a{/,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7f0dcb2{/static,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@727e59c7{/executors/threadDump/json,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@446e3b51{/executors/threadDump,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@180ac086{/executors/json,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@187da0ca{/executors,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1184f457{/environment/json,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1fb87016{/environment,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2d15ac57{/storage/rdd/json,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@e00fe0b{/storage/rdd,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7b5ebd93{/storage/json,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@656afeb5{/storage,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4add2d79{/stages/pool/json,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4b17c794{/stages/pool,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@368515ee{/stages/stage/json,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@58ceead5{/stages/stage,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@514a0037{/stages/json,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@44f73311{/stages,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3c6a49f3{/jobs/job/json,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2b644d36{/jobs/job,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3f754bab{/jobs/json,null,UNAVAILABLE}
18/03/05 21:01:35 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@69e220f7{/jobs,null,UNAVAILABLE}
18/03/05 21:01:35 INFO ui.SparkUI: Stopped Spark web UI at http://10.1.10.101:4040
18/03/05 21:01:35 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
18/03/05 21:01:35 ERROR client.TransportClient: Failed to send RPC 4708722597800291410 to /10.1.10.101:55202: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
18/03/05 21:01:35 ERROR cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(0,0,Map()) to AM was unsuccessful
java.io.IOException: Failed to send RPC 4708722597800291410 to /10.1.10.101:55202: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:249)
at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:233)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:514)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:488)
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:438)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:455)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
18/03/05 21:01:35 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
18/03/05 21:01:35 ERROR util.Utils: Uncaught exception in thread Thread-5
org.apache.spark.SparkException: Exception thrown in awaitResult
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:512)
at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:93)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:151)
at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:467)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1588)
at org.apache.spark.SparkContext$$anonfun$stop$8.apply$mcV$sp(SparkContext.scala:1826)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1825)
at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:581)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
Caused by: java.io.IOException: Failed to send RPC 4708722597800291410 to /10.1.10.101:55202: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:249)
at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:233)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:514)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:488)
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:438)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:455)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
18/03/05 21:01:35 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/03/05 21:01:35 INFO memory.MemoryStore: MemoryStore cleared
18/03/05 21:01:35 INFO storage.BlockManager: BlockManager stopped
18/03/05 21:01:35 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
18/03/05 21:01:35 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/03/05 21:01:35 INFO spark.SparkContext: Successfully stopped SparkContext
18/03/05 21:01:35 INFO util.ShutdownHookManager: Shutdown hook called
18/03/05 21:01:35 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-a93238f0-37a0-4e2f-a9b3-0857b87af379
18/03/05 21:01:35 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-a93238f0-37a0-4e2f-a9b3-0857b87af379/pyspark-15d826a4-2617-4f3f-a93c-cdab487bbe56
What do these errors mean, and how can I fix them? Any suggestions for tracking down this problem would be much appreciated.