While trying to use the PutHDFS processor on Apache NiFi 1.2.1 with the following configuration:
Hadoop Configuration Resources: /usr/local/hadoop-2.7.0/etc/hadoop/core-site.xml, /usr/local/hadoop-2.7.0/etc/hadoop/hdfs-site.xml
Directory: /mydir
I ran into the following error:
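For reference, the core-site.xml passed to the processor normally points `fs.defaultFS` at the NameNode. The sketch below is a minimal single-node example; the hostname and port are assumptions for illustration, not values from the original post:

```xml
<!-- core-site.xml: minimal single-node sketch (hostname/port are illustrative) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```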
Caused by: org.apache.hadoop.ipc.RemoteException: File /tweets/.381623121831518.json could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3067)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:722)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
Answer (score: 0)
I corrected this issue with the following procedure:
Stop all Hadoop services:
$ cd $HADOOP_HOME
$ sbin/stop-all.sh
Delete the namenode and datanode directories referenced in hdfs-site.xml (the names below are placeholders for whatever paths your hdfs-site.xml configures):
$ rm -r <datanode-dir>
$ rm -r <namenode-dir>
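The directories to delete are whichever paths hdfs-site.xml sets for `dfs.namenode.name.dir` and `dfs.datanode.data.dir`. A sketch of the relevant fragment, with illustrative paths that are assumptions rather than values from the original post:

```xml
<!-- hdfs-site.xml: the properties that name the directories to delete (paths are illustrative) -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop-2.7.0/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop-2.7.0/data/datanode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```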
Format the namenode (note: this wipes all HDFS metadata, so only do it on a cluster whose data you can discard):
$ hadoop namenode -format
Start all Hadoop services:
$ sbin/start-all.sh
Verify that all services are running:
bash-3.2# jps
61488 ResourceManager
57128 RunNiFi
61160 NameNode
61256 DataNode
57129 NiFi
61609 Jps
61371 SecondaryNameNode
61582 NodeManager
Check that files are being delivered to /mydir, the destination Directory configured in the PutHDFS processor. The transferred files should appear in this directory:
$ hdfs dfs -ls /mydir