When my Spark program calls JavaSparkContext.stop(), the following error occurs.
14/12/11 16:24:19 INFO Main: sc.stop {
14/12/11 16:24:20 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(cluster02,38918) not found
14/12/11 16:24:20 ERROR SendingConnection: Exception while reading SendingConnection to ConnectionManagerId(cluster04,59659)
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:252)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:295)
at org.apache.spark.network.SendingConnection.read(Connection.scala:390)
at org.apache.spark.network.ConnectionManager$$anon$6.run(ConnectionManager.scala:205)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
14/12/11 16:24:20 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(cluster03,59821) not found
14/12/11 16:24:20 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(cluster02,38918) not found
14/12/11 16:24:20 WARN ConnectionManager: All connections not cleaned up
14/12/11 16:24:20 INFO Main: sc.stop }
How can I fix this problem?
The configuration is as follows:
Update
The following error also occurs when the Spark client runs on Linux. (I think it is essentially the same error.)
14/12/12 11:32:02 INFO Main: sc.stop {
14/12/12 11:32:02 INFO SparkUI: Stopped Spark web UI at http://clientmachine:4040
14/12/12 11:32:02 INFO DAGScheduler: Stopping DAGScheduler
14/12/12 11:32:02 INFO YarnClientSchedulerBackend: Shutting down all executors
14/12/12 11:32:02 INFO YarnClientSchedulerBackend: Asking each executor to shut down
14/12/12 11:32:02 INFO YarnClientSchedulerBackend: Stopped
14/12/12 11:32:03 INFO ConnectionManager: Removing SendingConnection to ConnectionManagerId(cluster04,52869)
14/12/12 11:32:03 INFO ConnectionManager: Removing ReceivingConnection to ConnectionManagerId(cluster04,52869)
14/12/12 11:32:03 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(cluster04,52869) not found
14/12/12 11:32:03 INFO ConnectionManager: Removing SendingConnection to ConnectionManagerId(cluster03,57334)
14/12/12 11:32:03 INFO ConnectionManager: Removing ReceivingConnection to ConnectionManagerId(cluster03,57334)
14/12/12 11:32:03 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(cluster03,57334) not found
14/12/12 11:32:03 INFO ConnectionManager: Removing SendingConnection to ConnectionManagerId(cluster02,54205)
14/12/12 11:32:03 INFO ConnectionManager: Removing ReceivingConnection to ConnectionManagerId(cluster02,54205)
14/12/12 11:32:03 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(cluster02,54205) not found
14/12/12 11:32:03 INFO MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
14/12/12 11:32:03 INFO ConnectionManager: Selector thread was interrupted!
14/12/12 11:32:03 INFO ConnectionManager: Removing ReceivingConnection to ConnectionManagerId(cluster02,54205)
14/12/12 11:32:03 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(cluster02,54205) not found
14/12/12 11:32:03 INFO ConnectionManager: Removing ReceivingConnection to ConnectionManagerId(cluster04,52869)
14/12/12 11:32:03 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(cluster04,52869) not found
14/12/12 11:32:03 WARN ConnectionManager: All connections not cleaned up
14/12/12 11:32:03 INFO ConnectionManager: ConnectionManager stopped
14/12/12 11:32:03 INFO MemoryStore: MemoryStore cleared
14/12/12 11:32:03 INFO BlockManager: BlockManager stopped
14/12/12 11:32:03 INFO BlockManagerMaster: BlockManagerMaster stopped
14/12/12 11:32:03 INFO SparkContext: Successfully stopped SparkContext
14/12/12 11:32:03 INFO Main: sc.stop }
Answer 0 (score: 0)
Some threads suggest adding a Thread.sleep before calling stop() on the SparkContext. See whether that helps.
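A minimal sketch of that workaround, assuming a plain Java driver (the class name, app name, and the 3-second delay are illustrative and not taken from the question):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical minimal driver; the real job logic is not shown in the question.
        SparkConf conf = new SparkConf().setAppName("Main");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // ... run the job here ...

        // Workaround from the answer above: pause briefly so in-flight
        // ConnectionManager connections can finish closing before the
        // context is torn down. The 3-second value is an arbitrary guess,
        // not something prescribed by Spark.
        Thread.sleep(3000);
        sc.stop();
    }
}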