I'm trying to run a single-node Hadoop cluster on a Raspberry Pi 3. I can run hdfs dfs -ls /, so I know that HDFS at least is up, but when I run the example wordcount job as a smoke test to check that the cluster is working, I get the following error:

Container ... is running beyond virtual memory limits. Current usage: 33.8 MB of 255 MB physical memory used; 1.1 GB of 537.6 MB virtual memory used. Killing container.
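In case it's relevant: as far as I understand, YARN computes the virtual memory limit as the container's physical allocation multiplied by yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1 — and that matches the 537.6 MB in the error (256 MB × 2.1 = 537.6 MB). I have not set this property anywhere, so it should be at its default, i.e. equivalent to:

```xml
<!-- Not in my yarn-site.xml; shown only to illustrate the default value
     that appears to produce the 537.6 MB limit (256 MB × 2.1). -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
```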
My configuration is as follows:
hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
mapred-site.xml:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>256</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx210m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>256</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx210m</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>256</value>
</property>
yarn-site.xml:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>4</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>768</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>128</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>768</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>4</value>
</property>
core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/hdfs/tmp</value>
</property>
Raspberry Pi specs — CPU: 4× ARM Cortex-A53, RAM: 1 GB.
In case it's useful, my configuration and setup are exactly as described in this blog post: https://web.archive.org/web/20170221231927/http://www.becausewecangeek.com/building-a-raspberry-pi-hadoop-cluster-part-1/