Hadoop / YARN (v0.23.3) Pseudo-Distributed Mode setup :: no job nodes

Date: 2012-09-20 23:32:50

Tags: hadoop mapreduce yarn mrv2

I have just set up Hadoop/YARN 2.x (specifically v0.23.3) in Pseudo-Distributed mode.

I followed the instructions on several blogs and websites, which more or less prescribe the same setup. I also followed the 3rd edition of O'Reilly's Hadoop book (which, ironically, was the least useful).

THE PROBLEM:

After running "start-dfs.sh" and then "start-yarn.sh", while all of the daemons
do start (as indicated by jps(1)), the Resource Manager web portal
(Here: http://localhost:8088/cluster/nodes) indicates 0 (zero) job-nodes in the
cluster. So while the example/test Hadoop job I submit does indeed get
scheduled, it pends forever because, I assume, the configuration doesn't see a
node to run it on.
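
(For what it's worth, the zero-node count can also be confirmed from the shell.
The curl below just fetches the very portal page cited above; grepping its
markup for this host's name is only a rough heuristic, not an official API:)

  hadoop$ # Fetch the same RM portal page; on a healthy single-node setup it
  hadoop$ # should mention this host in its node table. Here the count is 0.
  hadoop$ curl -s http://localhost:8088/cluster/nodes | grep -ci "$(hostname)"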

Below are the steps I performed, including resultant configuration files.
Hopefully the community can help me out... (And thank you in advance.)

CONFIGURATION:

The following environment variables are set in the UNIX account profiles (~/.profile) of both my own user and the hadoop user:

export HADOOP_HOME=/home/myself/APPS.d/APACHE_HADOOP.d/latest
  # Note: /home/myself/APPS.d/APACHE_HADOOP.d/latest -> hadoop-0.23.3

export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_INSTALL=${HADOOP_HOME}
export HADOOP_CLASSPATH=${HADOOP_HOME}/lib
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop/conf
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop/conf
export JAVA_HOME=/usr/lib/jvm/jre
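
A couple of quick sanity checks that the intended binaries and configuration
directory are the ones actually being picked up (both commands ship with the
distribution; nothing here is specific to my layout):

  hadoop$ which hadoop && hadoop version   # should report 0.23.3
  hadoop$ ls ${HADOOP_CONF_DIR}            # should list the XML files shown below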

hadoop$ java -version

java version "1.7.0_06-icedtea"
OpenJDK Runtime Environment (fedora-2.3.1.fc17.2-x86_64)
OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode)

# Although the above shows OpenJDK, the same problem happens with Sun's JRE/JDK.

NAMENODE & DATANODE directories, also specified in etc/hadoop/conf/hdfs-site.xml:

/home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/DATANODE.d/
/home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/NAMENODE.d/
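
(These were created by hand before formatting — see note (2) further below. A
minimal sketch of the equivalent commands; the bash brace expansion is just
shorthand:)

  hadoop$ mkdir -p ${HADOOP_HOME}/YARN_DATA.d/HDFS.d/{NAMENODE.d,DATANODE.d}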

Next, the various XML configuration files (again, YARN/MRv2/v0.23.3):

hadoop$ pwd; ls -l
/home/myself/APPS.d/APACHE_HADOOP.d/latest/etc/hadoop/conf
lrwxrwxrwx 1 hadoop hadoop   16 Sep 20 13:14 core-site.xml -> ../core-site.xml
lrwxrwxrwx 1 hadoop hadoop   16 Sep 20 13:14 hdfs-site.xml -> ../hdfs-site.xml
lrwxrwxrwx 1 hadoop hadoop   18 Sep 20 13:14 httpfs-site.xml -> ../httpfs-site.xml
lrwxrwxrwx 1 hadoop hadoop   18 Sep 20 13:14 mapred-site.xml -> ../mapred-site.xml
-rw-rw-r-- 1 hadoop hadoop   10 Sep 20 15:36 slaves
lrwxrwxrwx 1 hadoop hadoop   16 Sep 20 13:14 yarn-site.xml -> ../yarn-site.xml

core-site.xml

<?xml version="1.0"?>
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
  </property>
</configuration>
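
(As an aside: once HDFS is formatted and its daemons are started — see the
steps further below — this stanza alone can be smoke-tested; a healthy
single-node setup should report one live DataNode:)

  hadoop$ hadoop fs -ls /         # should succeed (possibly empty) against hdfs://localhost/
  hadoop$ hdfs dfsadmin -report   # look for one live/available DataNode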

mapred-site.xml

<?xml version="1.0"?>
<!-- mapred-site.xml -->
<configuration>

  <!-- Same problem whether this (legacy) stanza is included or not.  -->
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

hdfs-site.xml

<?xml version="1.0"?>
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/HDFS.d/NAMENODE.d</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/HDFS.d/DATANODE.d</value>
  </property>
</configuration>

yarn-site.xml

<?xml version="1.0"?>
<!-- yarn-site.xml -->
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:8032</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/TEMP.d</value>
  </property>
</configuration>
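
(When chasing the zero-node symptom, the NodeManager log is the first thing
worth grepping; the path below assumes the default log location under
${HADOOP_HOME}/logs, i.e. no YARN_LOG_DIR override:)

  hadoop$ # Look for a successful registration line, or any error/exception,
  hadoop$ # from the NodeManager's attempt to reach the ResourceManager.
  hadoop$ grep -iE 'registered|error|exception' \
              ${HADOOP_HOME}/logs/yarn-*-nodemanager-*.log | tail -20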

etc/hadoop/conf/slaves

localhost
   # Community/friends, is this entry correct/needed for my pseudo-dist mode?

Miscellaneous wrap-up notes:

(1) As you may have gleaned from above, all files/directories are owned
    by the 'hadoop' UNIX user; there is a corresponding hadoop:hadoop UNIX
    user and group, respectively.

(2) The following command was run after the NAMENODE & DATANODE directories
    (listed above) were created (and whose paths were entered into
    hdfs-site.xml):

    hadoop$ hadoop namenode -format

(3) Next, I ran "start-dfs.sh", then "start-yarn.sh".
    Here is jps(1) output:

hadoop@e6510$ jps
    21979 DataNode
    22253 ResourceManager
    22384 NodeManager
    22156 SecondaryNameNode
    21829 NameNode
    22742 Jps

Thank you!

2 Answers:

Answer 0 (score: 0):

After struggling with this question for a long time without success (and believe me, I tried it all), I got Hadoop working via a different solution. Whereas above I had downloaded a gzip/tar ball of the Hadoop distribution (again v0.23.3) from one of the download mirrors, this time I used the Cloudera CDH distribution of RPM packages, which I installed via their YUM repos. In the hope that this helps someone, here are the detailed steps.

Step 1:

For Hadoop 0.20.x (MapReduce version 1):

  # rpm -Uvh http://archive.cloudera.com/redhat/6/x86_64/cdh/cdh3-repository-1.0-1.noarch.rpm
  # rpm --import http://archive.cloudera.com/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
  # yum install hadoop-0.20-conf-pseudo

-OR-

For Hadoop 0.23.x (MapReduce version 2):

  # rpm -Uvh http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/cloudera-cdh-4-0.noarch.rpm
  # rpm --import http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
  # yum install hadoop-conf-pseudo

In both cases above, installing that one "pseudo" package (short for "pseudo-distributed Hadoop mode") will, by itself, conveniently trigger the installation of all the other necessary packages you need (via dependency resolution).
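
You can see exactly what dependency resolution pulled in with a standard RPM
query (nothing Hadoop-specific here):

  root# rpm -qa | grep -i hadoop | sort   # every package the meta-package dragged in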

Step 2:

Install Sun/Oracle's Java JRE (if you haven't already done so). You can install it via the RPM they provide, or via the portable gzip/tar-ball version. It doesn't matter which, as long as you set and export "JAVA_HOME" appropriately in your environment, and ensure ${JAVA_HOME}/bin/java is in your PATH.

  # echo $JAVA_HOME; which java
  /home/myself/APPS.d/JAVA-JRE.d/jdk1.7.0_07
  /home/myself/APPS.d/JAVA-JRE.d/jdk1.7.0_07/bin/java

Note: I actually create a symlink called "latest" and point/re-point it at the Java version-specific directory whenever I update Java. I spelled out the full path above for the reader's understanding.
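
In other words, something like the following (the directory and version number
are just the ones from my machine above):

  myself$ cd /home/myself/APPS.d/JAVA-JRE.d
  myself$ ln -sfn jdk1.7.0_07 latest      # re-point this after each Java update
  myself$ export JAVA_HOME=/home/myself/APPS.d/JAVA-JRE.d/latest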

Step 3: Format HDFS as the "hdfs" UNIX user (created during the "yum install" above).

  # sudo su hdfs -c "hadoop namenode -format"
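
A quick way to confirm the format took (same sudo idiom as above; an empty
listing, with no errors, is the expected result on a fresh filesystem):

  # sudo su hdfs -c "hadoop fs -ls /"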

Step 4:

Manually start the Hadoop daemons:

  for file in /etc/init.d/hadoop*
  do
     "${file}" start
  done
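
(To check on them afterwards — and optionally have them come back after a
reboot — the standard RHEL/CentOS service tooling applies; the chkconfig line
is ordinary RHEL usage, nothing Hadoop-specific:)

  for file in /etc/init.d/hadoop*
  do
     "${file}" status                     # each should report as running
     # chkconfig $(basename ${file}) on   # uncomment to enable at boot
  done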

Step 5:

Check that things are working. What follows is for MapReduce v1 (and at this superficial level, MapReduce v2 isn't all that different).

  root# jps
   23104 DataNode
   23469 TaskTracker
   23361 SecondaryNameNode
   23187 JobTracker
   23267 NameNode
   24754 Jps

   # Do the next commands as yourself (not as "root").
   myself$ hadoop fs -mkdir /foo
   myself$ hadoop fs -rmr /foo
   myself$ hadoop jar /usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u5-examples.jar pi 2 100000
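
If the pi example is healthy, it should chew for a bit and then print its
estimate of Pi to stdout. You can also watch it through the classic MRv1 job
CLI:

   myself$ hadoop job -list all   # shows prep/running/completed jobs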

I hope this helps!

Answer 1 (score: 0):

Noel,

I followed the steps in this tutorial http://www.thecloudavenue.com/search?q=0.23 the other day, and managed to set up a small cluster of three CentOS 6.3 machines.