HADOOP YARN - Application is added to the scheduler and is not yet activated. Skipping AM assignment as cluster resource is empty

Date: 2018-03-30 17:59:59

Tags: hadoop yarn

I am evaluating YARN for a project and am trying to get the simple distributed shell example to work. I have gotten the application to the SUBMITTED state, but it never launches. This is the information reported from this line:

ApplicationReport report = yarnClient.getApplicationReport(appId);

Application is added to the scheduler and is not yet activated. Skipping AM assignment as cluster resource is empty. Details : AM Partition = DEFAULT_PARTITION; AM Resource Request = memory:1024, vCores:1; Queue Resource Limit for AM = memory:0, vCores:0; User AM Resource Limit of the queue = memory:0, vCores:0; Queue AM Resource Usage = memory:128, vCores:1;

The solution for other developers appears to have been to increase yarn.scheduler.capacity.maximum-am-resource-percent in the yarn-site.xml file from its default value of .1. I have tried values of .2 and .5, but that did not seem to help.
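
For reference, here is a minimal sketch of that change. Note that the CapacityScheduler normally reads this property from capacity-scheduler.xml rather than yarn-site.xml, and the ResourceManager must be restarted (or the queues refreshed with yarn rmadmin -refreshQueues) for it to take effect:

<!-- capacity-scheduler.xml: allow AMs to use up to 50% of queue resources -->
<property>
        <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
        <value>0.5</value>
</property>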

3 Answers:

Answer 0 (score: 4):

It looks like you have not configured the RAM allocated to YARN in the right way. This can be a pain in the ..... if you try to infer/adapt from a tutorial to fit your own installation (likewise when parsing the documentation). I strongly recommend that you use a tool such as the following:

wget http://public-repo-1.hortonworks.com/HDP/tools/2.6.0.3/hdp_manual_install_rpm_helper_files-2.6.0.3.8.tar.gz
tar zxvf hdp_manual_install_rpm_helper_files-2.6.0.3.8.tar.gz
rm hdp_manual_install_rpm_helper_files-2.6.0.3.8.tar.gz
mv hdp_manual_install_rpm_helper_files-2.6.0.3.8/ hdp_conf_files
python hdp_conf_files/scripts/yarn-utils.py -c 4 -m 8 -d 1 false

-c    number of cores per node
-m    amount of memory per node (in GB)
-d    number of disks per node
-bool "True" if HBase is installed; "False" if not

This should give you something like:

Using cores=4 memory=8GB disks=1 hbase=True
Profile: cores=4 memory=5120MB reserved=3GB usableMem=5GB disks=1
Num Container=3
Container Ram=1536MB
Used Ram=4GB
Unused Ram=3GB
yarn.scheduler.minimum-allocation-mb=1536
yarn.scheduler.maximum-allocation-mb=4608
yarn.nodemanager.resource.memory-mb=4608
mapreduce.map.memory.mb=1536
mapreduce.map.java.opts=-Xmx1228m
mapreduce.reduce.memory.mb=3072
mapreduce.reduce.java.opts=-Xmx2457m
yarn.app.mapreduce.am.resource.mb=3072
yarn.app.mapreduce.am.command-opts=-Xmx2457m
mapreduce.task.io.sort.mb=614

Edit yarn-site.xml and mapred-site.xml accordingly:

 nano ~/hadoop/etc/hadoop/yarn-site.xml
 nano ~/hadoop/etc/hadoop/mapred-site.xml
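
As a sketch, mapping the example values printed above into yarn-site.xml would look like the following (substitute the numbers from your own run of the script):

<property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4608</value>
</property>

<property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1536</value>
</property>

<property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4608</value>
</property>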

Also, you should have this in your yarn-site.xml:

<property>
        <name>yarn.acl.enable</name>
        <value>0</value>
</property>

<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>name_of_your_master_node</value>
</property>

<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>

and this in mapred-site.xml:

<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>

Then, upload your conf files to every node using scp (assuming you have uploaded your ssh key to each one):

for node in node1 node2 node3; do scp ~/hadoop/etc/hadoop/* $node:/home/hadoop/hadoop/etc/hadoop/; done

Then, restart YARN:

stop-yarn.sh
start-yarn.sh

and check that you can see your nodes:

hadoop@master-node:~$ yarn node -list
18/06/01 12:51:33 INFO client.RMProxy: Connecting to ResourceManager at master-node/192.168.0.37:8032
Total Nodes:3
     Node-Id         Node-State Node-Http-Address   Number-of-Running-Containers
 node3:34683            RUNNING        node3:8042                              0
 node2:36467            RUNNING        node2:8042                              0
 node1:38317            RUNNING        node1:8042                              0
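
To confirm that the cluster now reports nonzero resources (the error above complained that the cluster resource was empty), you can also query a node's status; for example, using one of the node IDs listed above:

yarn node -status node1:38317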

This will probably fix your problem (good luck). (additional info)

Answer 1 (score: 0):

Add the following properties to yarn-site.xml and restart dfs and yarn:

<property>
   <name>yarn.scheduler.capacity.root.support.user-limit-factor</name>  
   <value>2</value>
</property>
<property>
   <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
   <value>0.0</value>
</property>
<property>
   <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
   <value>100.0</value>
</property>
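
To apply these, a typical restart sequence (assuming the standard Hadoop sbin scripts are on your PATH) is:

stop-yarn.sh
stop-dfs.sh
start-dfs.sh
start-yarn.sh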

Answer 2 (score: 0):

I encountered the same error and tried to resolve it. I realized that the resource manager had no resources with which to allocate the application master (AM) of the MapReduce application.
I navigated in a browser to http://localhost:8088/cluster/nodes/unhealthy, checked the unhealthy nodes (in my case there was only one), and looked at the health report. I saw a warning that certain log directories were full. I cleaned up those directories, after which the node became healthy and the application state switched from ACCEPTED to RUNNING. By default, YARN behaves this way whenever a node's disk is more than 90% full. In any case, you have to free up space and bring usage below 90%. My exact health report was:

1/1 local-dirs usable space is below configured utilization percentage/no more usable space [ /tmp/hadoop-train/nm-local-dir : used space above threshold of 90.0% ] ; 
1/1 log-dirs usable space is below configured utilization percentage/no more usable space [ /opt/manual/hadoop/logs/userlogs : used space above threshold of 90.0% ]
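
To see how close each directory is to the 90% threshold and what is consuming the space, something like the following helps (the paths are the ones from my health report above; yours may differ):

df -h /tmp/hadoop-train/nm-local-dir /opt/manual/hadoop/logs/userlogs
du -sh /tmp/hadoop-train/nm-local-dir /opt/manual/hadoop/logs/userlogs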