I'm trying to run Hadoop in pseudo-distributed mode. To do this, I'm following this tutorial: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
I can ssh to my localhost and format the filesystem. However, I can't start the NameNode and DataNode daemons with this command:
sbin/start-dfs.sh
When I run it with sudo, I get:
ubuntu@ip-172-31-42-67:/usr/local/hadoop-2.6.0$ sudo sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: Permission denied (publickey).
localhost: Permission denied (publickey).
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Permission denied (publickey).
And when running it without sudo:
ubuntu@ip-172-31-42-67:/usr/local/hadoop-2.6.0$ sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.6.0/logs’: Permission denied
localhost: chown: cannot access ‘/usr/local/hadoop-2.6.0/logs’: No such file or directory
localhost: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out: No such file or directory
localhost: head: cannot open ‘/usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out’ for reading: No such file or directory
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out: No such file or directory
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out: No such file or directory
localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.6.0/logs’: Permission denied
localhost: chown: cannot access ‘/usr/local/hadoop-2.6.0/logs’: No such file or directory
localhost: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out: No such file or directory
localhost: head: cannot open ‘/usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out’ for reading: No such file or directory
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out: No such file or directory
localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out: No such file or directory
Starting secondary namenodes [0.0.0.0]
0.0.0.0: mkdir: cannot create directory ‘/usr/local/hadoop-2.6.0/logs’: Permission denied
0.0.0.0: chown: cannot access ‘/usr/local/hadoop-2.6.0/logs’: No such file or directory
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out
0.0.0.0: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out: No such file or directory
0.0.0.0: head: cannot open ‘/usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out’ for reading: No such file or directory
0.0.0.0: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out: No such file or directory
0.0.0.0: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out: No such file or directory
I've also now noticed that listing the contents of an HDFS directory fails:
ubuntu@ip-172-31-42-67:~/dir$ hdfs dfs -ls output/
ls: Call From ip-172-31-42-67.us-west-2.compute.internal/172.31.42.67 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Can anyone tell me what the problem might be?
Answer 0 (score: 0)
The errors above indicate a permissions problem. You have to make sure the hadoop user has the proper permissions on /usr/local/hadoop. For that you can try:
sudo chown -R hadoop /usr/local/hadoop/
or
sudo chmod 777 /usr/local/hadoop/
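Note that in the session shown above the login user is ubuntu, not hadoop. A minimal sketch of the same fix for that case (assuming the daemons should run as ubuntu, as the shell prompt suggests):

# Assumption: the daemons run as the 'ubuntu' user from the prompt above.
# Give that user ownership of the install so start-dfs.sh can create logs/.
sudo chown -R ubuntu:ubuntu /usr/local/hadoop-2.6.0
# Or pre-create just the logs directory and hand it over:
sudo mkdir -p /usr/local/hadoop-2.6.0/logs
sudo chown ubuntu:ubuntu /usr/local/hadoop-2.6.0/logs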
Answer 1 (score: 0)
Please make sure you have done the following "Configuration" step correctly; you need to edit four .xml files:
Edit the file hadoop-2.6.0/etc/hadoop/core-site.xml and put this between the <configuration> and </configuration> tags:
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
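For reference, the complete file would then look like this (a sketch; the <configuration> element is already present in the stock file):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>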
Edit the file hadoop-2.6.0/etc/hadoop/hdfs-site.xml and put this between the <configuration> tags:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
Edit the file hadoop-2.6.0/etc/hadoop/mapred-site.xml, paste in the following, and save:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
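Note that in Hadoop 2.6.0 this file usually ships only as a template, so you may need to create it first:

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml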
Edit the file hadoop-2.6.0/etc/hadoop/yarn-site.xml, paste in the following, and save:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
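After editing the configuration, re-format the filesystem and restart the daemons as in the tutorial (a sketch, run from the install directory; note that re-formatting erases any existing HDFS data):

# Run from /usr/local/hadoop-2.6.0. Re-formatting wipes existing HDFS data.
bin/hdfs namenode -format
sbin/start-dfs.sh
# Verify: you should see NameNode, DataNode and SecondaryNameNode listed.
jps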
Answer 2 (score: 0)
I had the same problem, and the only solution I found was:
I suggest you generate a new ssh-rsa key.
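A minimal sketch of setting up passphrase-less ssh to localhost, along the lines of the cited tutorial (assuming the default key location):

# Generate a passphrase-less RSA key and authorize it for localhost logins.
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
# Confirm ssh no longer asks for a password (or key passphrase):
ssh localhost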