Diagnostic message for this task: Container [pid=3347,containerID=container_1490354262227_0013_01_000104] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 1.5 GB of 5 GB virtual memory used. Killing container.
Dump of the process-tree for container_1490354262227_0013_01_000104:
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 3360 3347 3347 3347 (java) 7596 396 1537003520 262629 /usr/java/latest/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx864m -Djava.io.tmpdir=/mnt3/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1490354262227_0013/container_1490354262227_0013_01_000104/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/mnt/var/log/hadoop/userlogs/application_1490354262227_0013/container_1490354262227_0013_01_000104 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.35.178.86 49938 attempt_1490354262227_0013_m_000004_3 104
|- 3347 2563 3347 3347 (bash) 0 1 115806208 698 /bin/bash -c /usr/java/latest/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx864m -Djava.io.tmpdir=/mnt3/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1490354262227_0013/container_1490354262227_0013_01_000104/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/mnt/var/log/hadoop/userlogs/application_1490354262227_0013/container_1490354262227_0013_01_000104 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.35.178.86 49938 attempt_1490354262227_0013_m_000004_3 104 1>/mnt/var/log/hadoop/userlogs/application_1490354262227_0013/container_1490354262227_0013_01_000104/stdout 2>/mnt/var/log/hadoop/userlogs/application_1490354262227_0013/container_1490354262227_0013_01_000104/stderr
Answer 0 (score: 1)
Container [pid=3347,containerID=container_1490354262227_0013_01_000104] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 1.5 GB of 5 GB virtual memory used.
It looks like your process needs more memory than the configured limit allows, so YARN kills the container.
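The numbers in the dump show why: the task JVM was launched with -Xmx864m inside a 1024 MB container, leaving only about 160 MB of headroom for off-heap memory (metaspace, thread stacks, direct buffers). Assuming a 4 KB page size, the reported RSS of 262629 pages is roughly 1.0 GB, exactly the limit, so the NodeManager killed the container.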
You need to increase the container size:
SET hive.tez.container.size=4096;                              -- value is in MB
SET hive.auto.convert.join.noconditionaltask.size=1436549120;  -- value is in bytes (~1370 MB)
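As a rule of thumb (my assumption, not stated in the original answer), hive.auto.convert.join.noconditionaltask.size is kept near one third of hive.tez.container.size, which is where the ~1370 MB figure comes from, and the Tez task heap is capped below the container size, commonly around 80% of it. A sketch of that companion setting:

SET hive.tez.java.opts=-Xmx3276m;  -- hypothetical value: heap ≈ 80% of the 4096 MB container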
Read more about this here.
Answer 1 (score: 0)
If it is failing on the reducers:
INSERT OVERWRITE TABLE items_s3_table PARTITION (w_id)
SELECT pk, cId, fcsku, cType, disposition, cReferenceId, snapshotId, quantity, w_id
FROM items_dynamodb_table
DISTRIBUTE BY w_id;
SET hive.exec.reducers.bytes.per.reducer=67108864;  -- ~64 MB of input per reducer
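67108864 bytes is 64 MB, so Hive plans roughly one reducer per 64 MB of reducer input: for example (a hypothetical figure), 64 GB of shuffled data would be split across about 1024 reducers, giving each one a share small enough to fit in its container.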
For the mappers:
SET mapreduce.map.memory.mb=4096;       -- map container size, in MB
SET mapreduce.map.java.opts=-Xmx3000m;  -- map JVM heap, kept below the container size
For the reducers:
SET mapreduce.reduce.memory.mb=4096;       -- reduce container size, in MB
SET mapreduce.reduce.java.opts=-Xmx3000m;  -- reduce JVM heap, kept below the container size
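Putting the answer together, a minimal sketch of a single Hive session (the values are the ones suggested above, not tuned figures; SET only affects the current session, so hive-site.xml stays unchanged):

SET hive.exec.reducers.bytes.per.reducer=67108864;  -- more, smaller reducers
SET mapreduce.map.memory.mb=4096;
SET mapreduce.map.java.opts=-Xmx3000m;
SET mapreduce.reduce.memory.mb=4096;
SET mapreduce.reduce.java.opts=-Xmx3000m;
-- ...followed by the INSERT OVERWRITE ... DISTRIBUTE BY w_id query shown above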