Hadoop YARN - how to limit requestedMemory?

Posted: 2014-06-15 20:45:45

Tags: hadoop mapreduce yarn

While trying to run the Pi example from hadoop-mapreduce-examples-2.2.0.jar, I get the following exception:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException): Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=512

I am not sure where 1536 comes from, but 512 is the maximum heap size I set for the child tasks in mapred-site.xml:

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx410m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx410m</value>
</property>

What is the correct way to determine the map/reduce task memory sizes?

1 answer:

Answer 0 (score: 5)

The 512 is the value of yarn.scheduler.maximum-allocation-mb in yarn-site.xml, and 1536 is the default value of the yarn.app.mapreduce.am.resource.mb parameter in mapred-site.xml.

Make sure yarn.scheduler.maximum-allocation-mb > yarn.app.mapreduce.am.resource.mb and everything will be fine.
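For reference, a minimal sketch of the two settings involved. The property names are the standard Hadoop keys mentioned above; the values 2048 and 1024 are illustrative assumptions, not recommendations, so size them to fit your cluster's memory:

<!-- yarn-site.xml: the largest container the scheduler will grant -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>

<!-- mapred-site.xml: memory requested for the MapReduce ApplicationMaster container;
     must stay at or below yarn.scheduler.maximum-allocation-mb -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>
</property>

In other words, either raise yarn.scheduler.maximum-allocation-mb above 1536, or lower yarn.app.mapreduce.am.resource.mb (and keep mapreduce.map.memory.mb / mapreduce.reduce.memory.mb under the same ceiling) so that every container request fits within the scheduler's maximum.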