start-dfs.sh fails to start the HDFS daemons

Date: 2016-04-28 06:15:18

Tags: hadoop hdfs

I am quite confused by the Hadoop configuration in core-site.xml and hdfs-site.xml; it looks as if the start-dfs.sh script does not actually use those settings. I formatted the NameNode successfully as the hdfs user, but running start-dfs.sh fails to start the HDFS daemons. Can anyone help? Here is the error output:

[hdfs@I26C ~]$ start-dfs.sh 
Starting namenodes on [I26C]
I26C: mkdir: cannot create directory ‘/hdfs’: Permission denied
I26C: chown: cannot access ‘/hdfs/hdfs’: No such file or directory
I26C: starting namenode, logging to /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out
I26C: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out: No such file or directory
I26C: head: cannot open ‘/hdfs/hdfs/hadoop-hdfs-namenode-I26C.out’ for reading: No such file or directory
I26C: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out: No such file or directory
I26C: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out: No such file or directory
10.1.226.15: mkdir: cannot create directory ‘/hdfs’: Permission denied
10.1.226.15: chown: cannot access ‘/hdfs/hdfs’: No such file or directory
10.1.226.15: starting datanode, logging to /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out
10.1.226.15: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out: No such file or directory
10.1.226.16: mkdir: cannot create directory ‘/edw/hadoop-2.7.2/logs’: Permission denied
10.1.226.16: chown: cannot access ‘/edw/hadoop-2.7.2/logs’: No such file or directory
10.1.226.16: starting datanode, logging to /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out
10.1.226.16: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out: No such file or directory
10.1.226.15: head: cannot open ‘/hdfs/hdfs/hadoop-hdfs-datanode-I26C.out’ for reading: No such file or directory
10.1.226.15: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out: No such file or directory
10.1.226.15: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out: No such file or directory
10.1.226.16: head: cannot open ‘/edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out’ for reading: No such file or directory
10.1.226.16: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out: No such file or directory
10.1.226.16: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out: No such file or directory
Starting secondary namenodes [0.0.0.0]
0.0.0.0: mkdir: cannot create directory ‘/hdfs’: Permission denied
0.0.0.0: chown: cannot access ‘/hdfs/hdfs’: No such file or directory
0.0.0.0: starting secondarynamenode, logging to /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out
0.0.0.0: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out: No such file or directory
0.0.0.0: head: cannot open ‘/hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out’ for reading: No such file or directory
0.0.0.0: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out: No such file or directory
0.0.0.0: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out: No such file or directory

Here is some information about my deployment.

Master:

 hostname: I26C
 IP: 10.1.226.15

Slave:

 hostname: I26D
 IP: 10.1.226.16

Hadoop version: 2.7.2

OS: CentOS 7

Java: 1.8

I created one group and four users:

groupadd hadoop
useradd -g hadoop hadoop
useradd -g hadoop hdfs
useradd -g hadoop mapred
useradd -g hadoop yarn

Permissions on the HDFS NameNode and DataNode directories:

drwxrwxr-x. 3 hadoop hadoop 4.0K Apr 26 15:40 hadoop-data
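
For reference, here is a minimal sketch of how such a directory could have been prepared (the /edw/hadoop-data path is taken from the configuration below; the exact commands are an assumption, not part of the original setup):

# Hypothetical setup: create the data dirs referenced in the config files
# and hand them to the hadoop group so the hdfs user can write to them.
sudo mkdir -p /edw/hadoop-data/dfs/namenode /edw/hadoop-data/dfs/datanode
sudo chown -R hadoop:hadoop /edw/hadoop-data   # matches the listing above
sudo chmod -R 775 /edw/hadoop-data             # group rwx, like drwxrwxr-x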

core-site.xml settings:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/edw/hadoop-data/</value>
    <description>Temporary Directory.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.1.226.15:54310</value>
  </property>
</configuration>
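
To confirm which configuration the scripts actually pick up, the stock getconf tool reads the same files the daemons do. A quick check (the expected values are what this config should yield):

hdfs getconf -confKey fs.defaultFS     # expected: hdfs://10.1.226.15:54310
hdfs getconf -confKey hadoop.tmp.dir   # expected: /edw/hadoop-data/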

hdfs-site.xml settings:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
    <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
    </description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///edw/hadoop-data/dfs/namenode</value>
    <description>Determines where on the local filesystem the DFS name node should store the name table(fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
    </description>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>67108864</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///edw/hadoop-data/dfs/datanode</value>
    <description>Determines where on the local filesystem an DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
    </description>
  </property>
</configuration>
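
Since the errors above are all about directories the daemons cannot create or open, a pre-flight check of the configured paths can help narrow things down. A sketch, assuming the paths from this config:

# Hypothetical pre-flight check: are the configured dirs writable by hdfs?
sudo -u hdfs test -w /edw/hadoop-data/dfs/namenode && echo "namenode dir OK"
sudo -u hdfs test -w /edw/hadoop-data/dfs/datanode && echo "datanode dir OK"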

2 Answers:

Answer 0 (score: 0)

The hdfs user does not have permission on the Hadoop directories. Say you are running the Hadoop setup as the hdfs user in the hadoop group; then you need to run the following command:

sudo chown -R hdfs:hadoop <directory-name>

This gives the user you are logged in as the appropriate read, write, and execute permissions.
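
Applied to the paths from the error output above, that would look something like the following sketch (run on each node, and only for directories that really belong to your Hadoop installation; the mkdir is needed because the log directory did not exist yet):

sudo mkdir -p /edw/hadoop-2.7.2/logs              # log dir the datanode tried to use
sudo chown -R hdfs:hadoop /edw/hadoop-2.7.2/logs
sudo chown -R hdfs:hadoop /edw/hadoop-data        # NameNode/DataNode data dirs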

Answer 1 (score: 0)

I have solved the problem. Thanks, everyone.

The HADOOP_LOG_DIR I exported in /etc/profile is not picked up by hadoop-env.sh. HADOOP_LOG_DIR was therefore empty, and start-dfs.sh fell back to the default in hadoop-env.sh:

export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

I ran start-dfs.sh as the hdfs user, so HADOOP_LOG_DIR expanded to /hdfs, a directory the hdfs user has no permission to create.
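
You can reproduce the expansion in a plain shell (assuming HADOOP_LOG_DIR is unset and the script runs as hdfs):

unset HADOOP_LOG_DIR
USER=hdfs
echo "${HADOOP_LOG_DIR}/$USER"   # prints "/hdfs" -- the path from the errors above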

Here is my new solution: edit ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh and set HADOOP_LOG_DIR:

HADOOP_LOG_DIR="/var/log/hadoop"
export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
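
For this to work, /var/log/hadoop itself must exist and be writable by the daemon users. One way to prepare it, assuming all daemon users are in the hadoop group (a sketch, not from the original answer):

sudo mkdir -p /var/log/hadoop
sudo chown root:hadoop /var/log/hadoop
sudo chmod 775 /var/log/hadoop   # hdfs, yarn, mapred can create their subdirs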