I installed Hadoop on a Linux cluster. When I try to start the servers with the command $ bin/start-all.sh, I get the following errors:
mkdir: cannot create directory `/var/log/hadoop/spuri2': Permission denied
chown: cannot access `/var/log/hadoop/spuri2': No such file or directory
/home/spuri2/spring_2012/Hadoop/hadoop/hadoop-1.0.2/bin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-spuri2-namenode.pid: Permission denied
head: cannot open `/var/log/hadoop/spuri2/hadoop-spuri2-namenode-gpu02.cluster.out' for reading: No such file or directory
localhost: /home/spuri2/.bashrc: line 10: /act/Modules/3.2.6/init/bash: No such file or directory
localhost: mkdir: cannot create directory `/var/log/hadoop/spuri2': Permission denied
localhost: chown: cannot access `/var/log/hadoop/spuri2': No such file or directory
I have configured the log directory parameter in conf/hadoop-env.sh to point to the /tmp directory, and I have set "hadoop.tmp.dir" in core-site.xml to /tmp/ as well. Since I have no access to /var/log, why do the Hadoop daemons still try to write to /var/log and fail?
I would like to know why this happens.
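For context, HADOOP_LOG_DIR and HADOOP_PID_DIR are the two variables hadoop-daemon.sh consults for those paths; a minimal sketch of overriding them in conf/hadoop-env.sh (the /tmp locations below are placeholders, not the cluster's actual paths):

export HADOOP_LOG_DIR=/tmp/hadoop-${USER}/logs   # where the daemons create their .log/.out files
export HADOOP_PID_DIR=/tmp/hadoop-${USER}/pids   # where hadoop-daemon.sh writes its .pid files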
Answer 0 (score: 1)
You have to set this directory in the core-site.xml file, not in hadoop-env.sh:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Directory_hadoop_user_have_permission/temp/${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation. The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>
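After moving hadoop.tmp.dir to a new location, the NameNode normally has to be formatted again before the daemons start cleanly; a rough sequence from the Hadoop 1.x install directory (note that formatting erases any existing HDFS data):

bin/hadoop namenode -format   # re-initialises the NameNode metadata under the new hadoop.tmp.dir
bin/start-all.sh              # then start the daemons again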
Answer 1 (score: 0)
In short, I ran into this problem because there were multiple Hadoop installations on the university cluster. Installing Hadoop as root broke my local Hadoop installation.
The reason the Hadoop daemons could not start is that they could not write to certain files that required root permissions, and I was running Hadoop as a regular user. The problem arose because our university's system administrator had installed Hadoop as root, so when I started my local Hadoop installation, the root installation's configuration files took precedence over my local configuration files. It took a long time to figure this out, but after the root-installed Hadoop was uninstalled, the problem was resolved.
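A quick way to check whether a system-wide installation is shadowing a local one is to see which hadoop binary and which HADOOP_* environment variables the shell actually picks up; a rough sketch using the standard Hadoop 1.x variable names:

which hadoop                             # a system-wide binary earlier on PATH wins over the local one
echo "$HADOOP_HOME" "$HADOOP_CONF_DIR"   # a globally exported HADOOP_CONF_DIR overrides the local conf/ directory
env | grep -i hadoop                     # any HADOOP_* settings exported by /etc/profile or module files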
Answer 2 (score: 0)
I used to get the same error. If you have added the required properties under the configuration tags, then before running, switch to the superuser of the installation: su - username (the user that owns the Hadoop directory), and then try executing start-all.sh.
Make sure you have added the necessary entries between the configuration tags as mentioned in the tutorial:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
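For example, a hypothetical session (hduser and /usr/local/hadoop are placeholders for whichever account and path own the Hadoop installation):

su - hduser             # switch to the user that owns the Hadoop directory
cd /usr/local/hadoop    # placeholder for the actual Hadoop installation path
bin/start-all.sh        # start the daemons as that user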