Hadoop - Insufficient memory for the Java Runtime Environment when starting the YARN services

Date: 2016-09-29 13:10:37

Tags: java linux hadoop jvm yarn

I set up a cluster (1 master & 2 slaves: slave1, slave2) following the tutorial at http://pingax.com/install-apache-hadoop-ubuntu-cluster-setup. The first time I started the HDFS & YARN services they ran without any problem. But after stopping them and starting them again, I get the following when starting the YARN services from the master with start-yarn.sh:

# starting yarn daemons
# starting resourcemanager, logging to /local/hadoop/logs/yarn-dev-resourcemanager-login200.out
# 
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 168 bytes for AllocateHeap
# An error report file with more information is saved as: /local/hadoop/hs_err_pid21428.log

# Compiler replay data is saved as: /local/hadoop/replay_pid21428.log
slave1: starting nodemanager, logging to /local/hadoop/logs/yarn-dev-nodemanager-login198.out
slave2: starting nodemanager, logging to /local/hadoop/logs/yarn-dev-nodemanager-login199.out
slave2: #
slave2: # There is insufficient memory for the Java Runtime Environment to continue.
slave2: # Native memory allocation (malloc) failed to allocate 168 bytes for AllocateHeap
slave2: # An error report file with more information is saved as:
slave2: # /local/hadoop/hs_err_pid27199.log
slave2: #
slave2: # Compiler replay data is saved as:
slave2: # /local/hadoop/replay_pid27199.log

Following the suggestions in out of Memory Error in Hadoop and "Java Heap space Out Of Memory Error" while running a mapreduce program, I tried changing the heap memory size limit to 256, 512, 1024 and 2048 in all three files (~/.bashrc, hadoop-env.sh, mapred-site.sh), but none of this had any effect.
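To illustrate the kind of change I made, here is a minimal sketch assuming the standard Hadoop 2.x environment variables (treat the variable names and the 512 value as examples; 512 is one of the sizes I tried):

# in hadoop-env.sh: heap size (in MB) used by the Hadoop daemons
export HADOOP_HEAPSIZE=512

# in yarn-env.sh (or ~/.bashrc): heap size (in MB) used by the YARN daemons
export YARN_HEAPSIZE=512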

Note: I am not an expert on Linux or the JVM.

Contents of the hs_err log file from one of the nodes:

# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32784 bytes for Chunk::new
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (allocation.cpp:390), pid=16375, tid=0x00007f39a352c700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_102-b14) (build 1.8.0_102-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: /local/hadoop/core or core.16375 (max size 1 kB). To ensure a full core dump, try "ulimit -c unlimited" before starting Java again

CPU:total 1 (1 cores per cpu, 1 threads per core) family 6 model 45 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, avx, aes, clmul, tsc, tscinvbit, tscinv

Memory: 4k page, physical 2051532k(254660k free), swap 1051644k(1051324k free)
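A quick way to cross-check the physical RAM and swap figures on that last line directly on the node (standard Linux tools, nothing Hadoop-specific):

free -m      # physical RAM and swap in MB; should roughly match the 2 GB / 1 GB reported above
ulimit -c    # current core dump size limit, the setting the log suggests raising with "ulimit -c unlimited"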

1 Answer:

Answer 0 (score: 1):

It is not clear from your post how much memory the VM itself has, but the hs_err log suggests each VM has only 2 GB of physical RAM and 1 GB of swap. If that is the case, you really need to give the VMs more memory. Go no lower than 4 GB of physical RAM, or you will be relying on luck to get the Hadoop stack running while keeping the operating system happy at the same time. Ideally, give each VM around 8 GB of RAM so that a few GB remain available for MapReduce jobs.
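If adding physical RAM is not immediately possible, a swap file can at least keep the daemons from being killed while you sort that out. A rough sketch, assuming root access and a Linux node with a few GB of free disk (the 2G size and the /swapfile path are just examples):

free -m                          # check current RAM and swap
sudo fallocate -l 2G /swapfile   # create a 2 GB swap file (use dd if fallocate is unavailable)
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -m                          # swap total should now be about 3 GB

Keep in mind that swapping is slow and only papers over the problem; the real fix is the extra physical RAM described above.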