Using RPC

Time: 2016-09-29 07:13:09

Tags: java hadoop hdfs

This is my method for building the configuration:

    private Configuration buildConfiguration() {
        Configuration conf = new Configuration();

        if (connectivityDetail.isSecureMode()) {
            conf.set("hadoop.security.authentication", "kerberos");
            conf.set("hadoop.http.authentication.type", "kerberos");
            conf.set("dfs.namenode.kerberos.principal", connectivityDetail.getHdfsServicePrincipal());
        }

        if (isHAEnabled()) {
            String hdfsServiceName = connectivityDetail.getHdfsServiceName();

            conf.set("fs.defaultFS", "hdfs://" + hdfsServiceName);
            conf.set("dfs.ha.namenodes." + hdfsServiceName, "nn0,nn1");
            conf.set("dfs.nameservices", hdfsServiceName);
            conf.set("dfs.client.failover.proxy.provider." + hdfsServiceName,
                    "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
            conf.set("dfs.namenode.rpc-address." + hdfsServiceName + ".nn1",
                    connectivityDetail.getNameNodeUris().get(0));
            conf.set("dfs.namenode.rpc-address." + hdfsServiceName + ".nn0",
                    connectivityDetail.getNameNodeUris().get(1));
        }
        return conf;
    }
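Note that the block above maps nn1 to getNameNodeUris().get(0) and nn0 to get(1). With ConfiguredFailoverProxyProvider that ordering usually does not break anything, because the client simply tries the configured NameNodes in turn until one answers as active. A minimal sketch of that retry idea (the Endpoint interface and connectWithFailover method are hypothetical stand-ins, not Hadoop's actual classes):

```java
import java.util.List;

public class FailoverSketch {
    // Hypothetical stand-in for a NameNode endpoint; not a Hadoop class.
    interface Endpoint {
        String connect() throws Exception; // returns a handle, or throws if standby/down
    }

    // Try each configured endpoint in order, failing over on error,
    // similar in spirit to how ConfiguredFailoverProxyProvider cycles nn0/nn1.
    static String connectWithFailover(List<Endpoint> endpoints) throws Exception {
        Exception last = null;
        for (Endpoint e : endpoints) {
            try {
                return e.connect();
            } catch (Exception ex) {
                last = ex; // remember the failure and try the next NameNode
            }
        }
        throw new Exception("no NameNode reachable", last);
    }

    public static void main(String[] args) throws Exception {
        Endpoint standby = () -> { throw new Exception("standby"); };
        Endpoint active = () -> "connected-to-active";
        // Even if the "wrong" node is listed first, the client still succeeds.
        System.out.println(connectWithFailover(List.of(standby, active)));
    }
}
```

This is why a swapped nn0/nn1 mapping often goes unnoticed: the provider only cares that one of the listed addresses is the active NameNode.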

The hdfsService name is not being set correctly into the configuration object, yet I am still able to get a FileSystem and everything works. I am not sure why it is not using the service name?
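One likely reason it still works: a Hadoop Configuration also loads core-site.xml and hdfs-site.xml from the classpath, so fs.defaultFS and the nameservice settings may already be present even when the programmatic value is missing or wrong. The lookup is essentially "programmatic set wins, otherwise fall back to the loaded resources", which can be illustrated with plain java.util.Properties (an analogy only, not Hadoop's actual Configuration code; the key values are made up):

```java
import java.util.Properties;

public class ConfigFallbackSketch {
    public static void main(String[] args) {
        // Stand-in for values loaded from core-site.xml / hdfs-site.xml on the classpath.
        Properties siteXml = new Properties();
        siteXml.setProperty("fs.defaultFS", "hdfs://cluster-from-site-xml");

        // Programmatic settings layer on top of the site files.
        Properties conf = new Properties(siteXml); // siteXml acts as the defaults layer

        // Nothing set programmatically yet -> the site-file value is returned,
        // which is why getting a FileSystem can work even if your code skipped a key.
        System.out.println(conf.getProperty("fs.defaultFS"));

        // An explicit set overrides the classpath value, like conf.set("fs.defaultFS", ...).
        conf.setProperty("fs.defaultFS", "hdfs://programmatic-nameservice");
        System.out.println(conf.getProperty("fs.defaultFS"));
    }
}
```

So before assuming the programmatic service name took effect, it is worth printing conf.get("fs.defaultFS") to see which layer actually supplied the value.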

This is how I create the Path:
    public static Path getHdfsResourceLocation(String resourceLocation) throws Exception {
        String[] hdfsURIs = OrchestrationConfigUtil.getHdfsUri();
        Path hdfsResourceLoc = null;
        if (isHAEnabled()) {
            hdfsResourceLoc = new Path(resourceLocation);
        } else {
            hdfsResourceLoc = FileContext.getFileContext(new URI(hdfsURIs[0]))
                    .makeQualified(new Path(resourceLocation));
        }
        return hdfsResourceLoc;
    }
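The HA branch above can get away with a bare Path because, once fs.defaultFS points at the logical nameservice, relative paths resolve against the nameservice URI rather than a concrete host. The structural difference between the two URI styles is visible with plain java.net.URI (the hostnames here are made-up examples):

```java
import java.net.URI;

public class HaUriDemo {
    public static void main(String[] args) throws Exception {
        // Logical HA URI: the authority is the nameservice ID, with no real host/port.
        URI logical = new URI("hdfs://my-nameservice/user/data");
        System.out.println(logical.getAuthority() + " port=" + logical.getPort());

        // Non-HA URI: a concrete NameNode host plus its RPC port.
        URI direct = new URI("hdfs://namenode1.example.com:8020/user/data");
        System.out.println(direct.getHost() + " port=" + direct.getPort());
    }
}
// prints:
// my-nameservice port=-1
// namenode1.example.com port=8020
```

The HDFS client recognizes the logical authority via dfs.nameservices and hands it to the failover proxy provider, which is why no port appears in the HA form.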

Everything runs fine even though the service name is wrong, and I don't know why?

0 Answers:

There are no answers