I get the following error when trying to run a Spark program written in Scala. Does anyone know what it means?
2015-08-17 15:38:26,218 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-133353882-127.0.1.1-1438188921629 (Datanode Uuid null) service to hadoop-master/192.168.1.62:8020 beginning handshake with NN
2015-08-17 15:38:26,230 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-133353882-127.0.1.1-1438188921629 (Datanode Uuid null) service to hadoop-master/192.168.1.62:8020 successfully registered with NN
2015-08-17 15:38:26,230 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hadoop-master/192.168.1.62:8020 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2015-08-17 15:38:26,271 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-133353882-127.0.1.1-1438188921629 (Datanode Uuid 8534c56b-65ff-476b-8e03-f6fd70fbeab4) service to hadoop-master/192.168.1.62:8020 trying to claim ACTIVE state with txid=35908
2015-08-17 15:38:26,271 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-133353882-127.0.1.1-1438188921629 (Datanode Uuid 8534c56b-65ff-476b-8e03-f6fd70fbeab4) service to hadoop-master/192.168.1.62:8020
2015-08-17 15:38:26,299 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0xb2d92a4e40e, containing 1 storage report(s), of which we sent 1. The reports had 51 total blocks and used 1 RPC(s). This took 3 msec to generate and 25 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2015-08-17 15:38:26,299 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-133353882-127.0.1.1-1438188921629
2015-08-17 15:38:58,683 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-133353882-127.0.1.1-1438188921629:blk_1073744605_3805 src: /192.168.1.64:41006 dest: /192.168.1.64:50010
2015-08-17 15:39:03,785 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/var/data/hadoop/hdfs/dn, DS-c0f8f924-63a2-4e79-a4b2-c34f66e00e60): Scheduling suspect block BP-133353882-127.0.1.1-1438188921629:blk_1073742412_1589 for rescanning.
2015-08-17 15:39:03,787 ERROR org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/var/data/hadoop/hdfs/dn, DS-c0f8f924-63a2-4e79-a4b2-c34f66e00e60) exiting because of exception
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:539)
at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:619)
2015-08-17 15:39:03,788 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/var/data/hadoop/hdfs/dn, DS-c0f8f924-63a2-4e79-a4b2-c34f66e00e60) exiting.
2015-08-17 15:40:58,578 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.1.64:41006, dest: /192.168.1.64:50010, bytes: 134217728, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_130977062_1, offset: 0, srvID: 8534c56b-65ff-476b-8e03-f6fd70fbeab4, blockid: BP-133353882-127.0.1.1-1438188921629:blk_1073744605_3805, duration: 119724733298
2015-08-17 15:40:58,582 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-133353882-127.0.1.1-1438188921629:blk_1073744605_3805, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2015-08-17 15:40:58,608 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-133353882-127.0.1.1-1438188921629:blk_1073744606_3806 src: /192.168.1.64:41021 dest: /192.168.1.64:50010
2015-08-17 15:41:16,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.1.64:41021, dest: /192.168.1.64:50010, bytes: 134217728, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_130977062_1, offset: 0, srvID: 8534c56b-65ff-476b-8e03-f6fd70fbeab4, blockid: BP-133353882-127.0.1.1-1438188921629:blk_1073744606_3806, duration: 17693650243
2015-08-17 15:41:16,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-133353882-127.0.1.1-1438188921629:blk_1073744606_3806, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2015-08-17 15:41:16,355 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-133353882-127.0.1.1-1438188921629:blk_1073744607_3807 src: /192.168.1.64:41024 dest: /192.168.1.64:50010
2015-08-17 15:41:33,434 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.1.64:41024, dest: /192.168.1.64:50010, bytes: 134217728, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_130977062_1, offset: 0, srvID: 8534c56b-65ff-476b-8e03-f6fd70fbeab4, blockid: BP-133353882-127.0.1.1-1438188921629:blk_1073744607_3807, duration: 17072947884
2015-08-17 15:41:33,434 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-133353882-127.0.1.1-1438188921629:blk_1073744607_3807, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2015-08-17 15:41:33,455 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-133353882-127.0.1.1-1438188921629:blk_1073744608_3808 src: /192.168.1.64:41026 dest: /192.168.1.64:50010
2015-08-17 15:41:50,606 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.1.64:41026, dest: /192.168.1.64:50010, bytes: 134217728, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_130977062_1, offset: 0, srvID: 8534c56b-65ff-476b-8e03-f6fd70fbeab4, blockid: BP-133353882-127.0.1.1-1438188921629:blk_1073744608_3808, duration: 17141980857
2015-08-17 15:41:50,606 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-133353882-127.0.1.1-1438188921629:blk_1073744608_3808, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2015-08-17 15:41:50,686 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-133353882-127.0.1.1-1438188921629:blk_1073744609_3809 src: /192.168.1.64:41029 dest: /192.168.1.64:50010
2015-08-17 15:42:07,050 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.1.64:41029, dest: /192.168.1.64:50010, bytes: 134217728, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_130977062_1, offset: 0, srvID: 8534c56b-65ff-476b-8e03-f6fd70fbeab4, blockid: BP-133353882-127.0.1.1-1438188921629:blk_1073744609_3809, duration: 16356245249
2015-08-17 15:42:07,050 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-133353882-127.0.1.1-1438188921629:blk_1073744609_3809, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2015-08-17 15:42:07,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-133353882-127.0.1.1-1438188921629:blk_1073744610_3810 src: /192.168.1.64:41031 dest: /192.168.1.64:50010
2015-08-17 15:42:24,283 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-133353882-127.0.1.1-1438188921629:blk_1073744610_3810
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:849)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:804)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
at java.lang.Thread.run(Thread.java:745)
2015-08-17 15:42:24,879 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-133353882-127.0.1.1-1438188921629:blk_1073744610_3810, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2015-08-17 15:42:24,879 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-133353882-127.0.1.1-1438188921629:blk_1073744610_3810, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2015-08-17 15:42:24,879 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-133353882-127.0.1.1-1438188921629:blk_1073744610_3810 received exception java.io.IOException: Connection reset by peer
2015-08-17 15:42:25,109 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ubuntu-hadoop-3:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.168.1.64:41031 dst: /192.168.1.64:50010
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:849)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:804)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
at java.lang.Thread.run(Thread.java:745)
2015-08-17 15:44:30,211 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/var/data/hadoop/hdfs/dn, DS-c0f8f924-63a2-4e79-a4b2-c34f66e00e60): Scheduling suspect block BP-133353882-127.0.1.1-1438188921629:blk_1073744609_3809 for rescanning.
The last time this happened, I managed to get past it by decommissioning and wiping the datanode. However, I'm not even sure that's what actually fixed it, and it's certainly not a proper solution.
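For reference, the procedure I used was roughly the following. This is a sketch from memory: the exclude file is whatever dfs.hosts.exclude points to in my hdfs-site.xml, and /var/data/hadoop/hdfs/dn is the data directory that appears in the log above.

# 1. Add the datanode's hostname to the exclude file referenced by
#    dfs.hosts.exclude in hdfs-site.xml, then tell the namenode to re-read it:
hdfs dfsadmin -refreshNodes

# 2. Wait until the node shows up as "Decommissioned":
hdfs dfsadmin -report

# 3. Stop the datanode, wipe its block storage, and start it again:
hadoop-daemon.sh stop datanode
rm -rf /var/data/hadoop/hdfs/dn/*
hadoop-daemon.sh start datanode

# 4. Remove the host from the exclude file and refresh once more:
hdfs dfsadmin -refreshNodes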
Many thanks.