None of the nodes will start

Asked: 2018-04-15 05:10:04

Tags: hadoop hdfs yarn

I have set up my configuration files and formatted my filesystem, but whenever I try to run the start shell scripts I get the error below.

I have the start scripts aliased as hstart; the alias definition is included with my other files further down.

The error:

computer:~ seanplowman$ hstart
18/04/14 23:34:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-namenode-Seans
localhost: Error: Could not find or load main class Mac.log

localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-datanode-Seans
localhost: Error: Could not find or load main class Mac.log

Starting secondary namenodes [0.0.0.0]
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 69: [: Mac.out: integer expression expected
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-seanplowman-secondarynamenode-Seans
0.0.0.0: Error: Could not find or load main class Mac.log

18/04/14 23:35:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
/usr/local/hadoop/sbin/yarn-daemon.sh: line 60: [: Mac.out: integer expression expected
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-seanplowman-resourcemanager-Seans
Error: Could not find or load main class Mac.log

localhost: /usr/local/hadoop/sbin/yarn-daemon.sh: line 60: [: Mac.out: integer expression expected
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-seanplowman-nodemanager-Seans
localhost: Error: Could not find or load main class Mac.log

jps also shows that none of the daemons are running after the start scripts finish. From my research it seems the problem may be with my hostname, but trying to change it did not fix anything.
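For reference, on a working single-node setup jps would list all five daemons the scripts try to start; the output below is only illustrative (the PIDs are made up):

$ jps
4321 NameNode
4422 DataNode
4523 SecondaryNameNode
4624 ResourceManager
4725 NodeManager
4826 Jps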

I will include my other configuration files below for context, to show how everything is set up.

/usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>

 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
  <description>The name of the default file system.  A URI whose scheme and 
  authority determine the FileSystem implementation.  The uri's scheme determines 
  the config property (fs.SCHEME.impl) naming the FileSystem implementation
  class.  The uri's authority is used to determine the host, port, etc. for a filesystem.
  </description>
 </property>
</configuration>

/usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
 <property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
 </property>
</configuration>

/usr/local/hadoop/etc/hadoop/mapred-site.xml

<configuration>
 <property>
  <name>mapred.job.tracker</name>
  <value>localhost:9010</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
</configuration>

I have also made some changes to hadoop-env.sh. I will put those below.

/usr/local/hadoop/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="

~/.bashrc

#Hadoop variables
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"
###end of paste

~/.bash_profile

alias hstart="/usr/local/hadoop/sbin/start-dfs.sh;/usr/local/hadoop/sbin/start-yarn.sh"
alias hstop="/usr/local/hadoop/sbin/stop-yarn.sh;/usr/local/hadoop/sbin/stop-dfs.sh"

I am not sure what the next step is from here, having looked over every file involved.

1 Answer:

Answer (score: 2):

I think your Mac's hostname has a space in it. For example, Seans Mac.

The default log files are named using these patterns:

HDFS:log=$HADOOP_LOG_DIR/hadoop-$HADOOP_IDENT_STRING-$command-$HOSTNAME.out
YARN:log=$YARN_LOG_DIR/yarn-$YARN_IDENT_STRING-$command-$HOSTNAME.out

$HOSTNAME is the problem; a space there is unexpected.

If you look at the output, you will notice hadoop-seanplowman-namenode-Seans, so I suspect:

HADOOP_IDENT_STRING = the user running the script = seanplowman
command = namenode
HOSTNAME = Seans Mac
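To see why a space triggers the "[: Mac.out: integer expression expected" message, here is a minimal sketch of the failure, assuming the daemon script passes the log path unquoted into its log-rotation helper (this is a simplified, hypothetical version of the logic around line 69 of hadoop-daemon.sh, not the actual script):

#!/usr/bin/env bash
# Simplified stand-in for the log-rotation helper in hadoop-daemon.sh.
rotate_log () {
  log=$1              # with a space in the path, $1 is only "...-Seans"
  num=5
  if [ -n "$2" ]; then
    num=$2            # ...and the second word, "Mac.out", lands here
  fi
  while [ $num -gt 1 ]; do  # -> "[: Mac.out: integer expression expected"
    num=$((num - 1))
  done
}

log="/usr/local/hadoop/logs/hadoop-seanplowman-namenode-Seans Mac.out"
rotate_log $log       # unquoted, so the shell splits it at the space

Because the path is passed unquoted, the shell word-splits it at the space and the second half ends up where the script expects a number.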

See if fixing the hostname so that it has no spaces changes anything.
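On macOS, one way to do that is with scutil (this assumes admin rights; seans-mac is just an example name):

sudo scutil --set HostName seans-mac
sudo scutil --set LocalHostName seans-mac
sudo scutil --set ComputerName seans-mac
hostname    # verify the new, space-free name

Then restart the daemons with your hstop/hstart aliases.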

If not, edit the yarn-daemon.sh and hadoop-daemon.sh scripts so that they start with:

#!/usr/bin/env bash
set -xv

and then edit your question with the output.
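For example, you could start a single daemon by hand and capture the trace (bash writes the -xv trace to stderr; the output file name here is arbitrary):

/usr/local/hadoop/sbin/hadoop-daemon.sh start namenode 2> /tmp/hadoop-daemon-trace.log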