Setting up Hadoop YARN (single node) on Ubuntu

Date: 2014-09-22 16:30:22

Tags: java hadoop hdfs yarn hadoop2

I set up Hadoop YARN (2.5.1) as a single-node cluster on Ubuntu 13. When I run start-dfs.sh it produces the output below and the processes do not start (I confirmed this with the jps and ps commands). My bashrc additions are also copied below. Any ideas on what I need to reconfigure?

bashrc additions:

export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_INSTALL=/opt/hadoop/hadoop-2.5.1
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
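
The unexpanded $HADOOP_INSTALL in the output below suggests the variable is either empty in the non-interactive shell that start-dfs.sh opens over SSH, or written literally into one of the Hadoop config files. A minimal check, assuming the paths above:

    # Verify the variable resolves in a non-interactive SSH session,
    # which is how start-dfs.sh launches the daemons on localhost
    ssh localhost 'echo "$HADOOP_INSTALL"; ls "$HADOOP_INSTALL/bin/hdfs"'

    # Also look for the literal, unexpanded string in the config files,
    # since the error message prints $HADOOP_INSTALL verbatim
    grep -rn 'HADOOP_INSTALL' /opt/hadoop/hadoop-2.5.1/etc/hadoop/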

start-dfs.sh output:

14/09/22 12:24:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /opt/hadoop/hadoop-2.5.1/logs/hadoop-hduser-namenode-zkserver1.fidelus.com.out
localhost: nice: $HADOOP_INSTALL/bin/hdfs: No such file or directory
localhost: starting datanode, logging to /opt/hadoop/hadoop-2.5.1/logs/hadoop-hduser-datanode-zkserver1.fidelus.com.out
localhost: nice: $HADOOP_INSTALL/bin/hdfs: No such file or directory
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is cf:e1:ea:86:a4:0c:cd:ec:9d:b9:bc:90:9d:2b:db:d5.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/hadoop-2.5.1/logs/hadoop-hduser-secondarynamenode-zkserver1.fidelus.com.out
0.0.0.0: nice: $HADOOP_INSTALL/bin/hdfs: No such file or directory
14/09/22 12:24:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

The bin directory does contain the hdfs file, and its owner is hduser (I am running the processes as hduser). My $HADOOP_INSTALL setting points to the hadoop directory (/opt/hadoop/hadoop-2.5.1). Should I change anything about permissions or configuration, or simply move the directory out of /opt and into /usr/local?
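
Regarding the permission question, one quick generic check (not specific to this setup) is to confirm ownership and execute bits along the whole path:

    # Ownership and permissions on the hdfs launcher itself
    ls -l /opt/hadoop/hadoop-2.5.1/bin/hdfs

    # Every directory component leading to it; a missing execute bit
    # anywhere on the path would also block the script
    namei -l /opt/hadoop/hadoop-2.5.1/bin/hdfs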

Update: When I run start-yarn.sh, I get the following message:

localhost: Error: Could not find or load main class org.apache.hadoop.yarn.server.nodemanager.NodeManager

Update: I moved the directory to /usr/local, but I get the same warning message.

Update: According to jps, the ResourceManager is running. However, when I try to start YARN it fails with the error given above. I can reach the ResourceManager UI on port 8088. Any ideas?
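
For the NodeManager class-not-found error, a hedged check is to print the classpath that the yarn launcher actually assembles and confirm the server jar is present (the exact jar name depends on the Hadoop version):

    # Classpath the yarn command builds (Hadoop 2.x)
    yarn classpath

    # The NodeManager class ships in the yarn server jars; confirm they
    # exist under the install directory
    ls "$HADOOP_INSTALL"/share/hadoop/yarn/hadoop-yarn-server-nodemanager-*.jar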

1 Answer:

Answer 0 (score: 0)

Try starting the namenode (and the other daemons) with the following commands instead of start-dfs.sh and see whether that works:

    hadoop-daemon.sh start namenode
    hadoop-daemon.sh start secondarynamenode
    hadoop-daemon.sh start datanode
    yarn-daemon.sh start nodemanager
    mr-jobhistory-daemon.sh start historyserver
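
If those commands come up clean, jps should list the corresponding daemons; if one is missing, its .out and .log files under $HADOOP_INSTALL/logs usually name the cause. For example (process IDs will differ):

    jps
    # 4201 NameNode
    # 4302 SecondaryNameNode
    # 4405 DataNode
    # 4508 NodeManager
    # 4611 JobHistoryServer
    # 4700 Jps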