Files in HDFS remain empty even though the stream deploys successfully

Asked: 2016-04-19 10:13:29

Tags: spring-xd

I am running Spring XD 1.3.1 on my local PC (Windows 7 Enterprise), with Hortonworks provisioned on the Microsoft Azure cloud. On the Hortonworks cluster on Azure I created a directory xd and granted it the permissions required by the Spring XD docs. I then added the following entries to the config/server.yml file:

spring:
  profiles: singlenode
  hadoop:
    fsUri: hdfs://13.92.199.104:8020
    resourceManagerHost: 13.92.199.104
    resourceManagerPort: 8050

I also added the following entry to the config/hadoop.properties file:

fs.default.name=hdfs://13.92.199.104:8020

I then started Spring XD with xd-singlenode.bat and launched the shell with xd-shell.bat.
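For reference, the startup on Windows looks roughly like this (a sketch; the paths assume the default layout of the Spring XD 1.3.1 distribution, which is not stated in the post):

```
rem From the Spring XD installation directory:
xd\bin\xd-singlenode.bat

rem In a second console, start the interactive shell:
shell\bin\xd-shell.bat
```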

In the shell console I then ran the following command:

hadoop config fs --namenode hdfs://13.92.199.104:8020

Running the command hadoop fs ls /xd then returned:

Found 5 items
drwxrwxrwx   - jitendra.kumar.singh hdfs          0 2016-04-15 15:42 /xd/asdsadasdsad
drwxrwxrwx   - jitendra.kumar.singh hdfs          0 2016-04-15 14:30 /xd/fsd
drwxrwxrwx   - jitendra.kumar.singh hdfs          0 2016-04-19 12:53 /xd/jitendra
drwxrwxrwx   - jitendra.kumar.singh hdfs          0 2016-04-15 14:34 /xd/timeLogHdfs
drwxrwxrwx   - jitendra.kumar.singh hdfs          0 2016-04-19 12:22 /xd/zzzz

This means everything is fine so far and the Hadoop environment on Azure is configured correctly. Next I created a stream like time | hdfs --fsUri=hdfs://13.92.199.104/, which deployed successfully, and a file nnnnn.txt.tmp was created in HDFS on Azure. Up to this point everything was fine on the Spring XD server. I then undeployed the stream and found that nothing had been written to the nnnnn.txt.tmp file in HDFS, and the following error appeared on the Spring XD server:

2016-04-19T15:14:49+0530 1.3.1.RELEASE INFO DeploymentsPathChildrenCache-0 container.DeploymentListener - Deploying module 'time' for stream 'nnnnnn'
2016-04-19T15:14:49+0530 1.3.1.RELEASE INFO DeploymentsPathChildrenCache-0 container.DeploymentListener - Deploying module [ModuleDescriptor@709f459 moduleName = 'time', moduleLabel = 'time', group = 'nnnnnn', sourceChannelName = [null], sinkChannelName = [null], index = 0, type = source, parameters = map[[empty]], children = list[[empty]]]
2016-04-19T15:14:50+0530 1.3.1.RELEASE INFO DeploymentSupervisor-0 zk.ZKStreamDeploymentHandler - Deployment status for stream 'nnnnnn': DeploymentStatus{state=deployed}
2016-04-19T15:17:22+0530 1.3.1.RELEASE INFO main-EventThread container.DeploymentListener - Undeploying module [ModuleDescriptor@709f459 moduleName = 'time', moduleLabel = 'time', group = 'nnnnnn', sourceChannelName = [null], sinkChannelName = [null], index = 0, type = source, parameters = map[[empty]], children = list[[empty]]]
2016-04-19T15:17:22+0530 1.3.1.RELEASE INFO main-EventThread container.DeploymentListener - Undeploying module [ModuleDescriptor@16c4ba26 moduleName = 'hdfs', moduleLabel = 'hdfs', group = 'nnnnnn', sourceChannelName = [null], sinkChannelName = [null], index = 1, type = sink, parameters = map['fsUri' -> 'hdfs://13.92.199.104/'], children = list[[empty]]]
2016-04-19T15:17:46+0530 1.3.1.RELEASE WARN Thread-19 hdfs.DFSClient - DataStreamer Exception
org.apache.hadoop.ipc.RemoteException: File /xd/nnnnnn/nnnnnn-0.txt.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1588)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3116)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3040)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:789)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

        at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:na]
        at com.sun.proxy.$Proxy135.addBlock(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) ~[hadoop-hdfs-2.7.1.jar:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_77]
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77]
        at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.8.0_77]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.1.jar:na]
        at com.sun.proxy.$Proxy136.addBlock(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1430) ~[hadoop-hdfs-2.7.1.jar:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1226) ~[hadoop-hdfs-2.7.1.jar:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449) ~[hadoop-hdfs-2.7.1.jar:na]
2016-04-19T15:17:47+0530 1.3.1.RELEASE ERROR main-EventThread output.TextFileWriter - error in close
org.apache.hadoop.ipc.RemoteException: File /xd/nnnnnn/nnnnnn-0.txt.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1588)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3116)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3040)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:789)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

        at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:na]
        at com.sun.proxy.$Proxy135.addBlock(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) ~[hadoop-hdfs-2.7.1.jar:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_77]
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77]
        at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.8.0_77]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.1.jar:na]
        at com.sun.proxy.$Proxy136.addBlock(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1430) ~[hadoop-hdfs-2.7.1.jar:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1226) ~[hadoop-hdfs-2.7.1.jar:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449) ~[hadoop-hdfs-2.7.1.jar:na]
2016-04-19T15:17:47+0530 1.3.1.RELEASE ERROR main-EventThread outbound.HdfsDataStoreMessageHandler - Error closing writer
org.apache.hadoop.ipc.RemoteException: File /xd/nnnnnn/nnnnnn-0.txt.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1588)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3116)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3040)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:789)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

        at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:na]
        at com.sun.proxy.$Proxy135.addBlock(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) ~[hadoop-hdfs-2.7.1.jar:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_77]
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77]
        at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.8.0_77]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.1.jar:na]
        at com.sun.proxy.$Proxy136.addBlock(Unknown Source) ~[na:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1430) ~[hadoop-hdfs-2.7.1.jar:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1226) ~[hadoop-hdfs-2.7.1.jar:na]
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449) ~[hadoop-hdfs-2.7.1.jar:na]
2016-04-19T15:17:47+0530 1.3.1.RELEASE INFO DeploymentsPathChildrenCache-0 container.DeploymentListener - Path cache event: path=/deployments/modules/allocated/6b31fe38-f07d-4f75-ad60-fd7c56aca843/nnnnnn.source.time.1, type=CHILD_REMOVED
2016-04-19T15:17:47+0530 1.3.1.RELEASE INFO DeploymentsPathChildrenCache-0 container.DeploymentListener - Path cache event: path=/deployments/modules/allocated/6b31fe38-f07d-4f75-ad60-fd7c56aca843/nnnnnn.sink.hdfs.1, type=CHILD_REMOVED

1 Answer:

Answer 0 (score: 0)

It may be that your file rollover size has not been reached yet. You will want the --rollover option so the sink rolls over to a new file once the specified size is reached.

You can read more here: http://docs.spring.io/spring-xd/docs/current-SNAPSHOT/reference/html/#hadoop-hdfs
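For illustration, a stream definition using these options might look like the following (a sketch only: the stream name, the 1M rollover size, and the idleTimeout value are assumptions, not taken from the original post):

```
xd:> stream create --name timeToHdfs --definition "time | hdfs --fsUri=hdfs://13.92.199.104:8020 --rollover=1M --idleTimeout=10000" --deploy
```

With --idleTimeout set, the hdfs sink also closes the open .tmp file after the stream has been idle for the given number of milliseconds, so data can become visible in HDFS without waiting for a full rollover or an undeploy.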