I am writing a program that updates a Cassandra table.
In a first version, I updated the table row by row, mapping over the RDD. Now I would like to build a batch of updates, using the same syntax as in this thread.
But as soon as I use mapPartitions, I get a "Key already cancelled" error. The program updates the table correctly, but something seems to go wrong when the driver tries to shut down its resources:
15/01/26 00:07:00 INFO SparkContext: Job finished: collect at CustomerIdReconciliation.scala:143, took 1.998601568 s
15/01/26 00:07:00 INFO SparkUI: Stopped Spark web UI at http://cim1-dev:4044
15/01/26 00:07:00 INFO DAGScheduler: Stopping DAGScheduler
15/01/26 00:07:00 INFO SparkDeploySchedulerBackend: Shutting down all executors
15/01/26 00:07:00 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
15/01/26 00:07:00 INFO ConnectionManager: Removing SendingConnection to ConnectionManagerId(cim1-dev2,52516)
15/01/26 00:07:00 INFO ConnectionManager: Removing ReceivingConnection to ConnectionManagerId(cim1-dev2,52516)
15/01/26 00:07:00 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(cim1-dev2,52516) not found
15/01/26 00:07:00 INFO ConnectionManager: Key not valid ? sun.nio.ch.SelectionKeyImpl@7cedcb23
15/01/26 00:07:00 INFO ConnectionManager: key already cancelled ? sun.nio.ch.SelectionKeyImpl@7cedcb23
java.nio.channels.CancelledKeyException
at org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:386)
at org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:139)
15/01/26 00:07:00 INFO ConnectionManager: Key not valid ? sun.nio.ch.SelectionKeyImpl@38e8c534
15/01/26 00:07:00 INFO ConnectionManager: key already cancelled ? sun.nio.ch.SelectionKeyImpl@38e8c534 java.nio.channels.CancelledKeyException
at org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:310)
at org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:139)
15/01/26 00:07:00 INFO ConnectionManager: Removing SendingConnection to ConnectionManagerId(cim1-dev,44773)
15/01/26 00:07:00 INFO ConnectionManager: Removing ReceivingConnection to ConnectionManagerId(cim1-dev3,29293)
15/01/26 00:07:00 INFO ConnectionManager: Removing SendingConnection to ConnectionManagerId(cim1-dev3,29293)
15/01/26 00:07:00 INFO ConnectionManager: Key not valid ? sun.nio.ch.SelectionKeyImpl@159adcf5
15/01/26 00:07:00 INFO ConnectionManager: key already cancelled ? sun.nio.ch.SelectionKeyImpl@159adcf5 java.nio.channels.CancelledKeyException
at org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:386)
at org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:139)
15/01/26 00:07:00 INFO ConnectionManager: Removing ReceivingConnection to ConnectionManagerId(cim1-dev,44773)
15/01/26 00:07:00 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(cim1-dev,44773) not found
15/01/26 00:07:00 INFO ConnectionManager: Key not valid? sun.nio.ch.SelectionKeyImpl@329a6d86
15/01/26 00:07:00 INFO ConnectionManager: key already cancelled ? sun.nio.ch.SelectionKeyImpl@329a6d86 java.nio.channels.CancelledKeyException
at org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:310)
at org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:139)
15/01/26 00:07:00 INFO ConnectionManager: Key not valid ? sun.nio.ch.SelectionKeyImpl@3d3e86d5
15/01/26 00:07:00 INFO ConnectionManager: key already cancelled ? sun.nio.ch.SelectionKeyImpl@3d3e86d5 java.nio.channels.CancelledKeyException
at org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:310)
at org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:139)
15/01/26 00:07:01 INFO MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
15/01/26 00:07:01 INFO ConnectionManager: Selector thread was interrupted!
15/01/26 00:07:01 INFO ConnectionManager: ConnectionManager stopped
15/01/26 00:07:01 INFO MemoryStore: MemoryStore cleared
15/01/26 00:07:01 INFO BlockManager: BlockManager stopped
15/01/26 00:07:01 INFO BlockManagerMaster: BlockManagerMaster stopped
15/01/26 00:07:01 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/01/26 00:07:01 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/01/26 00:07:01 INFO SparkContext: Successfully stopped SparkContext
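For context, the mapPartitions body I am aiming for looks roughly like this. This is a self-contained sketch, not my actual code: the Cassandra session is stubbed out with a fake class, and the statement strings, table and column names are placeholders.

```scala
// Sketch of the per-partition batching pattern. In the real job the
// session would come from the Cassandra driver and execute a
// BatchStatement; here it is stubbed so the grouping logic stands alone.
object BatchUpdateSketch {

  // Stand-in for a Cassandra session (placeholder, not the driver API).
  final class FakeSession {
    var executedBatches: List[Seq[String]] = Nil
    def executeBatch(stmts: Seq[String]): Unit =
      executedBatches = stmts +: executedBatches
  }

  // The body passed to rdd.mapPartitions: group the partition's rows
  // into fixed-size batches and execute one statement per batch,
  // instead of one statement per row.
  def updatePartition(rows: Iterator[String],
                      batchSize: Int,
                      session: FakeSession): Iterator[Int] =
    rows.grouped(batchSize).map { batch =>
      session.executeBatch(batch)
      batch.size
    }

  def main(args: Array[String]): Unit = {
    val session = new FakeSession
    val rows = (1 to 10).map(i => s"UPDATE t SET c = $i WHERE id = $i").iterator
    val sizes = updatePartition(rows, batchSize = 4, session).toList
    println(sizes)                          // List(4, 4, 2)
    println(session.executedBatches.length) // 3
  }
}
```

The point of using mapPartitions here is to pay the session/batch overhead once per partition rather than once per row.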
I tried setting these two options, but it does not change anything:
set("spark.core.connection.ack.wait.timeout","600")
set("spark.akka.frameSize","50")
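I set them on the SparkConf before creating the context, along these lines (a configuration sketch; the app name is a placeholder):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("CustomerIdReconciliation") // placeholder
  .set("spark.core.connection.ack.wait.timeout", "600")
  .set("spark.akka.frameSize", "50")
val sc = new SparkContext(conf)
```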
Thanks for your help.