I am running a GROUP BY operation and one of the reduce tasks runs much longer than the others. Below is a sample code snippet and a description of the problem:
inp = LOAD 'input' USING PigStorage('|') AS (f1,f2,f3,f4,f5);
grp_inp = GROUP inp BY (f1,f2) PARALLEL 300;
Because the data is skewed, i.e. one key has far too many values, one reducer runs for 4 hours while all the other reduce tasks finish in about a minute.
What can I do to fix this? Are there alternative approaches? Any help would be greatly appreciated. Thanks!
Answer (score: 1)
There are a few things you may want to check:
1. Filter out records where both f1 and f2 are NULL, if there are any; see the sketch just below.
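A minimal sketch against the question's snippet (the inp_filtered alias is mine):

inp_filtered = FILTER inp BY NOT (f1 IS NULL AND f2 IS NULL);
grp_inp = GROUP inp_filtered BY (f1,f2) PARALLEL 300;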
2. If possible, get the Hadoop combiner used by implementing the Algebraic interface; a sketch follows the link:
https://www.safaribooksonline.com/library/view/programming-pig/9781449317881/ch10s02.html
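Pig's built-in aggregates such as COUNT, SUM, MIN and MAX are already algebraic. For a custom aggregate, here is a hedged sketch of a hypothetical count-style UDF implementing Algebraic (the MyCount class name and bodies are illustrative, not code from the book above). The Initial and Intermed stages run map-side and in the combiner, so far less data reaches the overloaded reducer:

import java.io.IOException;

import org.apache.pig.Algebraic;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

// Hypothetical count-style UDF. Implementing Algebraic lets Pig push the
// Initial/Intermed steps into the map phase and the combiner.
public class MyCount extends EvalFunc<Long> implements Algebraic {

    private static final TupleFactory tupleFactory = TupleFactory.getInstance();

    @Override
    public Long exec(Tuple input) throws IOException {
        // Non-combined fallback: count the whole bag in one pass.
        return count(input);
    }

    public String getInitial()  { return Initial.class.getName(); }
    public String getIntermed() { return Intermed.class.getName(); }
    public String getFinal()    { return Final.class.getName(); }

    // Runs map-side: counts the tuples in one input bag.
    public static class Initial extends EvalFunc<Tuple> {
        @Override
        public Tuple exec(Tuple input) throws IOException {
            return tupleFactory.newTuple(count(input));
        }
    }

    // Runs in the combiner: sums the partial counts.
    public static class Intermed extends EvalFunc<Tuple> {
        @Override
        public Tuple exec(Tuple input) throws IOException {
            return tupleFactory.newTuple(sum(input));
        }
    }

    // Runs reduce-side: produces the final total.
    public static class Final extends EvalFunc<Long> {
        @Override
        public Long exec(Tuple input) throws IOException {
            return sum(input);
        }
    }

    private static long count(Tuple input) throws IOException {
        DataBag bag = (DataBag) input.get(0);
        return bag.size();
    }

    private static long sum(Tuple input) throws IOException {
        DataBag bag = (DataBag) input.get(0);
        long total = 0;
        for (Tuple t : bag) {
            total += (Long) t.get(0);
        }
        return total;
    }
}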
3. Use a custom partitioner to distribute the data across reducers by another key.
Below is sample code I used to partition skewed data after a join (the same can be used after a group):
import java.util.logging.Level;
import java.util.logging.Logger;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.pig.backend.executionengine.ExecException;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.io.NullableTuple;
import org.apache.pig.impl.io.PigNullableWritable;

public class KeyPartitioner extends Partitioner<PigNullableWritable, Writable> {

    /**
     * Here the key holds the value of the current grouping key, and the
     * Writable value contains all fields of the tuple. I used the field at
     * index 5 of the tuple for partitioning, as I knew it had evenly
     * distributed values.
     */
    @Override
    public int getPartition(PigNullableWritable key, Writable value, int numPartitions) {
        Tuple valueTuple = (Tuple) ((NullableTuple) value).getValueAsPigType();
        try {
            if (valueTuple.size() > 5) {
                Object hashObj = valueTuple.get(5);
                Integer keyHash = Integer.parseInt(hashObj.toString());
                return Math.abs(keyHash) % numPartitions;
            } else if (valueTuple.size() > 1) {
                // Fall back to hashing the second field when the tuple is short.
                return Math.abs(valueTuple.get(1).hashCode()) % numPartitions;
            }
        } catch (NumberFormatException | ExecException ex) {
            Logger.getLogger(KeyPartitioner.class.getName()).log(Level.SEVERE, null, ex);
        }
        // Last resort: hash the grouping key itself.
        return Math.abs(key.hashCode()) % numPartitions;
    }
}
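To wire the partitioner into the grouping, Pig's PARTITION BY clause can be used; a sketch, assuming KeyPartitioner is packaged into a jar (the jar name here is hypothetical):

REGISTER keypartitioner.jar;
grp_inp = GROUP inp BY (f1,f2) PARTITION BY KeyPartitioner PARALLEL 300;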