I've been trying to set up a CDH4 installation of Hadoop. I have twelve machines, labeled hadoop01 - hadoop12, and the namenode, job tracker, and all datanodes are up. I can view dfshealth.jsp and see that it has found all the datanodes.
However, whenever I try to start the secondary namenode it throws an exception:
Starting Hadoop secondarynamenode: [ OK ]
starting secondarynamenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-secondarynamenode-hadoop02.dev.terapeak.com.out
Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:324)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:312)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:305)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:222)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:186)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:578)
This is my hdfs-site.xml on the secondary namenode:
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/1/dfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>10.100.20.168:50070</value>
    <description>
      The address and the base port on which the dfs NameNode Web UI will listen.
      If the port is 0, the server will start on a free port.
    </description>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.check.period</name>
    <value>3600</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.txns</name>
    <value>40000</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.edits.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.namenode.num.checkpoints.retained</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.restart.recover</name>
    <value>true</value>
  </property>
</configuration>
It seems like something is wrong with the value of dfs.namenode.http-address, but I'm not sure what. Should it start with http:// or hdfs://? I tried opening 10.100.20.168:50070 in lynx and it displays a page. Any ideas?
Answer 0 (score: 7)
It looks like I was missing the core-site.xml configuration on the secondary namenode. Once I added it, the process started correctly.
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.100.20.168/</value>
  </property>
</configuration>
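The stack trace makes sense in light of this: with no core-site.xml on the secondary namenode's classpath, fs.defaultFS falls back to its default of file:///, which has no host ("authority") for the secondary namenode to contact. A quick way to check what value the daemon actually sees is the hdfs getconf command (a sketch; run it on the secondary namenode host, assuming the Hadoop CLI is on your PATH):

```shell
# Print the effective fs.defaultFS as resolved from the local configuration.
hdfs getconf -confKey fs.defaultFS
# If this prints file:///, core-site.xml is missing or not being picked up.
# After adding the file above, it should print hdfs://10.100.20.168/
```

No `<test>` is included since the command requires a Hadoop installation.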
Answer 1 (score: 1)
If you are running a single-node cluster, make sure you have set the HADOOP_PREFIX variable correctly, as described in this link: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
I ran into the same problem as you, and it was resolved by setting this variable.
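For reference, a minimal sketch of setting that variable, assuming a tarball install under /usr/local/hadoop (adjust the path to wherever your Hadoop distribution is unpacked):

```shell
# Point HADOOP_PREFIX at the Hadoop install root for the current shell.
export HADOOP_PREFIX=/usr/local/hadoop

# Persist it for future shells, e.g. in ~/.bashrc (or in hadoop-env.sh):
echo 'export HADOOP_PREFIX=/usr/local/hadoop' >> ~/.bashrc
```

On a CDH4 package install the daemons are managed by init scripts, so this mainly applies to the single-node tarball setup the linked guide describes.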