How to handle Pig spill memory

Date: 2012-08-17 03:09:20

Tags: hadoop apache-pig

My code looks like this:

pymt = LOAD 'pymt' USING PigStorage('|') AS ($pymt_schema);

pymt_grp = GROUP pymt BY key;

results = FOREACH pymt_grp {
      /*
       *   some kind of logic, filter, count, distinct, sum, etc.
       */
};

But now I am seeing a lot of log entries like this:

org.apache.pig.impl.util.SpillableMemoryManager: Spilled an estimate of 207012796 bytes from 1 objects. init = 5439488(5312K) used = 424200488(414258K) committed = 559284224(546176K) max = 559284224(546176K)

I have actually found the cause: in most cases there is a "hot" key, something like key = 0 used as an IP address, but I don't want to filter out that key. Is there any solution? I have already implemented the Algebraic and Accumulator interfaces in my UDF.

1 Answer:

Answer 0 (score: 6)

I ran into a similar problem with badly skewed data, and with DISTINCT nested inside a FOREACH (since Pig performs the nested DISTINCT in memory). The solution was to pull the DISTINCT out of the FOREACH; for an example, see my answer to How to optimize a group by statement in PIG latin?
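As a sketch of that pattern (the relation and field names below, such as `ip`, are illustrative, not from the question): instead of computing a nested DISTINCT inside the FOREACH, project the columns first, deduplicate the whole relation in a MapReduce pass, and only then group and count:

    -- hypothetical goal: COUNT(DISTINCT ip) per key, without an in-memory nested DISTINCT
    pymt_pairs = FOREACH pymt GENERATE key, ip;
    uniq_pairs = DISTINCT pymt_pairs;           -- dedup runs as its own MapReduce job
    uniq_grp   = GROUP uniq_pairs BY key;
    ip_counts  = FOREACH uniq_grp GENERATE group AS key, COUNT(uniq_pairs) AS distinct_ips;

This trades an extra job for the nested DISTINCT's in-memory bag, so a hot key no longer forces a spill in a single reducer.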

If you do not need a DISTINCT before your SUM and COUNT, then I suggest using two GROUP BYs. The first groups on the key column plus another column or a random number mod 100, which acts as a salt (spreading the data for a single key across multiple reducers). The second GROUP BY, on the key column alone, computes the final SUM or COUNT over the partial results of group 1.

For example:

inpt = load '/data.csv' using PigStorage(',') as (Key, Value);
-- attach a random salt in [0, 99] so a hot key is spread across reducers
view = foreach inpt generate Key, Value, ((int)(RANDOM() * 100)) as Salt;

-- first pass: partial counts per (Key, Salt)
group_1 = group view by (Key, Salt);
group_1_count = foreach group_1 generate group.Key as Key, COUNT(view) as count;

-- second pass: combine the partial counts per Key
group_2 = group group_1_count by Key;
final_count = foreach group_2 generate flatten(group) as Key, SUM(group_1_count.count) as count;