HDFS write failed org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): Failed to close file

Date: 2015-10-09 05:06:40

Tags: java hadoop

I am trying to write a file to HDFS. Below is my sample code:

    // HADOOP_USER_NAME, FS_DEFAULT_NAME and DFS_REPLICATION are String constants
    // (Hadoop property keys) defined elsewhere in this class.
    URI uri = URI.create(sURI);
    System.setProperty(HADOOP_USER_NAME, grailsApplication.config.hadoop.user.name);
    Configuration conf = new Configuration();
    conf.set(FS_DEFAULT_NAME, grailsApplication.config.fs.default.name);
    conf.set(DFS_REPLICATION, grailsApplication.config.dfs.replication);
    Path path = new Path(uri);
    FileSystem file = FileSystem.get(uri, conf);
    FSDataOutputStream outputStream;
    // Append if the file already exists, otherwise create it.
    if (file.exists(path))
        outputStream = file.append(new Path(uri));
    else
        outputStream = file.create(new Path(uri));

    outputStream.write(request.data.getBytes());
    outputStream.close();

I am getting the exception below. Please advise what I might be doing wrong.

HDFS write failed org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): Failed to close file /EligibilityDataFeederJob/status.txt. Lease recovery is in progress. Try again later.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3071)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2861)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3145)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3108)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:598)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:415)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

1 Answer:

Answer 0 (score: 0):

Your code performs the operation outputStream = file.append(new Path(uri));. The append operation generally behaves better when the replication factor is set to 1, so check the replication factor you are using. This error occurs because the replicas of a block may end up with different generation stamp values.
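
For reference, here is a minimal, self-contained sketch of that suggestion: it forces dfs.replication to 1 and retries the append while the NameNode finishes lease recovery. The URI, the retry count, and the back-off interval are illustrative assumptions, not values taken from the question.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.ipc.RemoteException;

    public class HdfsAppendSketch {
        public static void main(String[] args) throws Exception {
            // Assumed example URI; replace with your own NameNode address and file path.
            URI uri = URI.create("hdfs://namenode:8020/EligibilityDataFeederJob/status.txt");
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode:8020");
            // Per the answer: append tends to behave better with a replication factor of 1.
            conf.set("dfs.replication", "1");

            FileSystem fs = FileSystem.get(uri, conf);
            Path path = new Path(uri);

            // Retry a few times while the NameNode completes lease recovery
            // (the retry count and sleep are arbitrary choices for this sketch).
            FSDataOutputStream out = null;
            for (int attempt = 0; attempt < 5 && out == null; attempt++) {
                try {
                    out = fs.exists(path) ? fs.append(path) : fs.create(path);
                } catch (RemoteException e) {
                    if (e.getClassName().endsWith("RecoveryInProgressException")) {
                        Thread.sleep(2000L); // wait for lease recovery, then retry
                    } else {
                        throw e;
                    }
                }
            }
            if (out == null) {
                throw new IllegalStateException("Lease recovery did not finish in time");
            }
            out.write("status-update".getBytes("UTF-8"));
            out.close();
            fs.close();
        }
    }

If the lease is still held by a previous writer that died, org.apache.hadoop.hdfs.DistributedFileSystem.recoverLease(Path) can also be called to ask the NameNode to start lease recovery explicitly before appending.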