I can't see the files in HDFS with the hadoop fs -ls / command, and I think it's because the NameNode isn't running. I've tried formatting the namenode and changing the port in core-site.xml to a different value, but jps still doesn't list a NameNode.
Here are the files: 1) core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hduser/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:50000</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri’s scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri’s authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
2) hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/hduser/hadoop-1.2.1/data</value>
</property>
</configuration>
3) mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If “local”, then jobs are run in-process as a single map
and reduce task.
</description>
</property>
</configuration>
The jps output is:
21043 JobTracker
21147 TaskTracker
21789 Jps
20839 DataNode
20957 SecondaryNameNode
Can anyone help?
Answer 0 (score: 1)
If the same problem happens again, delete everything in the tmp folder with rm -rf tmp/, then format the namenode; after that it should start. Alternatively, you can try recovering the namenode with bin/hdfs namenode -recover.
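As a concrete sketch of those steps, assuming the Hadoop 1.2.1 layout and the hadoop.tmp.dir value (/home/hduser/tmp) from the question's core-site.xml; note that clearing that directory erases all HDFS data:

```shell
# Stop any running daemons first (paths assumed from the question's setup)
cd /home/hduser/hadoop-1.2.1
bin/stop-all.sh

# Clear the directory pointed to by hadoop.tmp.dir
# WARNING: this destroys all existing HDFS data
rm -rf /home/hduser/tmp/*

# Re-format the NameNode (Hadoop 1.x uses bin/hadoop, not bin/hdfs)
bin/hadoop namenode -format

# Restart and confirm the NameNode now appears in jps
bin/start-all.sh
jps
```

If you'd rather not lose the data, inspect the NameNode log under logs/ first; a format is only safe for a throwaway single-node setup.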