I am unable to create new files or directories, or even list existing ones. I am running the commands below; please advise.
hduser@c:/usr/local/hadoop$ jps
8546 ResourceManager
9181 Jps
1503 NameNode
8674 NodeManager
4398 DataNode
hduser@c:/usr/local/hadoop$ bin/hadoop fs -ls /
ls: Couldn't create proxy provider null
hduser@c:/usr/local/hadoop$ bin/hadoop fs -mkdir /books
mkdir: Couldn't create proxy provider null
hduser@c:/usr/local/hadoop$
Below is the hdfs-site.xml I am using:
<configuration>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
<description>to specify replication</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/h3iHA/name</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/h3iHA/data2</value>
<final>true</final>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>c:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>a:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>c:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>a:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>file:///mnt/filer</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.configuredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hduser/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence
shell(/bin/true)
</value>
</property>
</configuration>
The core-site.xml file, identical on both nodes:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
</configuration>
Answer 0 (score: 1)
The Java class name set for the property dfs.client.failover.proxy.provider.mycluster is incorrect. It should be ConfiguredFailoverProxyProvider, not configuredFailoverProxyProvider.
In hdfs-site.xml:
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
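Assuming the corrected hdfs-site.xml has been copied to both nodes and the client reads the same configuration directory, one way to verify the change and retry the original commands is:
hduser@c:/usr/local/hadoop$ bin/hdfs getconf -confKey dfs.client.failover.proxy.provider.mycluster
hduser@c:/usr/local/hadoop$ bin/hadoop fs -mkdir /books
hduser@c:/usr/local/hadoop$ bin/hadoop fs -ls /
The getconf command should echo the corrected class name; the mkdir and ls commands from the question should then succeed instead of failing with the "Couldn't create proxy provider" error.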