hadoop pig script - union with a condition

Date: 2012-11-15 22:50:15

Tags: apache-pig hadoop-streaming

I am new to Pig. I want to union two files, A and B, on the iid field, but I don't want the output to contain any iids (from B) that A doesn't have. This seems simple, but I can't figure out how to do it correctly.

Here is my sample code containing just the union:

a = load '$input' as (iid:int, field:chararray, v1:chararray, v2:chararray, v3:chararray);
b = load '$data' as (iid:int, field:chararray, v1:chararray, v2:chararray, v3:chararray);
out = union onschema a,b;
singled = distinct out;
ordered = order singled by iid;
store ordered into '$output';

Here is sample data with only 3 columns to describe what I expect. Note that the fields are actually tab-separated.

Sample data A:

1  Name   Tom Linkon
1  Title  Professor
2  Name   Whatever
2  Title  Worker

Sample data B:

1  City  New York
2  City  Columbus
3  City  Fake fake
4  City  Blah Bla

Sample output:

1  Name   Tom Linkon
1  Title  Professor
1  City   New York
2  Name   Whatever
2  Title  Worker
2  City   Columbus
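In plain set terms, the desired result is all of A plus only those rows of B whose iid also occurs in A. A quick sketch of that logic in ordinary Python (not Pig; purely to pin down the expected semantics) reproduces the sample output above:

```python
# Sample data as (iid, field, value) tuples, mirroring the tab-separated files.
a = [(1, "Name", "Tom Linkon"), (1, "Title", "Professor"),
     (2, "Name", "Whatever"), (2, "Title", "Worker")]
b = [(1, "City", "New York"), (2, "City", "Columbus"),
     (3, "City", "Fake fake"), (4, "City", "Blah Bla")]

# Keep all of a, plus the rows of b whose iid appears in a;
# dedupe and sort by iid.
a_ids = {iid for iid, _, _ in a}
out = sorted(set(a) | {row for row in b if row[0] in a_ids})
```

iids 3 and 4 are dropped because they occur only in B.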

Thanks a lot for your help!

2 answers:

Answer 0 (score: 3)

Use COGROUP to bring together the records that share a key, while avoiding JOIN's unwanted cross product. Then FILTER out the groups whose bag of records from a is empty (those iids exist only in b), split the result back into two relations, and perform the UNION:

a = load '$input' as (iid:int, field:chararray, v1:chararray, v2:chararray, v3:chararray);
b = load '$data' as (iid:int, field:chararray, v1:chararray, v2:chararray, v3:chararray);
c = COGROUP a BY iid, b BY iid;
c_filt = FILTER c BY NOT IsEmpty(a);
a_new = FOREACH c_filt GENERATE FLATTEN(a) AS (iid, field, v1, v2, v3);
b_new = FOREACH c_filt GENERATE FLATTEN(b) AS (iid, field, v1, v2, v3);
out = UNION ONSCHEMA a_new, b_new;
singled = DISTINCT out;
STORE (ORDER singled BY iid) INTO '$output';
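One subtlety worth sanity-checking outside Pig: the filter must keep every group whose a-bag is non-empty, because iids that appear only in A must survive while iids that appear only in B are dropped. A plain-Python simulation of the COGROUP/FILTER/FLATTEN dataflow (with a hypothetical extra record for iid 5 that occurs only in A) makes this visible:

```python
from collections import defaultdict

# Sample data, plus a hypothetical iid (5) that appears only in a:
# it must survive, since only b-only iids are to be dropped.
a = [(1, "Name", "Tom Linkon"), (2, "Name", "Whatever"),
     (5, "Name", "A-only row")]
b = [(1, "City", "New York"), (3, "City", "Fake fake")]

# COGROUP a BY iid, b BY iid: one (a_bag, b_bag) pair per iid.
groups = defaultdict(lambda: ([], []))
for row in a:
    groups[row[0]][0].append(row)
for row in b:
    groups[row[0]][1].append(row)

# FILTER ... BY NOT IsEmpty(a): keep groups with a non-empty a-bag,
# then FLATTEN both bags into one sorted relation (the UNION step).
out = sorted(row for a_bag, b_bag in groups.values() if a_bag
             for row in a_bag + b_bag)
```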

However, I'm not a fan of this solution: that's too many lines and new relations for such a simple operation. What is really needed is a way to merge two bags into one. Pig apparently doesn't provide this (but if it does, please answer this SO question). You can, however, write a simple UDF to do it:

import java.io.IOException;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.DefaultDataBag;
import org.apache.pig.data.Tuple;

public class MERGE extends EvalFunc<DataBag> {
    @Override
    public DataBag exec(Tuple input) throws IOException {
        // Concatenate every bag passed in as an argument into a single bag.
        DataBag b = new DefaultDataBag();
        try {
            if (input != null)
                for (int i = 0; i < input.size(); i++)
                    b.addAll((DataBag) input.get(i));
        } catch (Exception e) { return null; }
        return b;
    }
}
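To call MERGE from a script, the class has to be compiled against the Pig libraries and registered first. A minimal sketch, assuming the class is packaged into a jar named myudfs.jar under a package myudfs (both names are placeholders; adjust them to your build):

```pig
REGISTER myudfs.jar;               -- placeholder jar name
DEFINE MERGE myudfs.MERGE();       -- placeholder package name
```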

With this UDF, the solution becomes:

a = load '$input' as (iid:int, field:chararray, v1:chararray, v2:chararray, v3:chararray);
b = load '$data' as (iid:int, field:chararray, v1:chararray, v2:chararray, v3:chararray);
g = COGROUP a BY iid, b BY iid;
g = FILTER g BY NOT IsEmpty(a); -- drop iids that exist only in b
c = FOREACH g GENERATE group AS iid, MERGE(a, b) AS bag;
out = FOREACH c {
    uniq = DISTINCT bag;
    GENERATE FLATTEN(uniq);
};
STORE (ORDER out BY $0) INTO '$output';

Another advantage of this approach is that if you have multiple inputs, you don't need a chain of COGROUPs and FOREACHs. Just pass more arguments to MERGE:

c = FOREACH (COGROUP a BY iid, b BY iid, ..., z BY iid)
    GENERATE group AS iid, MERGE(a,b,...,z) AS bag;

Answer 1 (score: 2)

This should solve your problem:

f1 = LOAD '/user/hadoop/f1' USING PigStorage('\t') AS (id_f1:int, key_f1:chararray, value_f1:chararray);
f2 = LOAD '/user/hadoop/f2' USING PigStorage('\t') AS (id_f2:int, key_f2:chararray, value_f2:chararray);
f3 = JOIN f1 by id_f1 LEFT OUTER, f2 BY id_f2;
f4 = FOREACH f3 GENERATE id_f1, key_f1, value_f1;
f5 = FOREACH f3 GENERATE id_f2, key_f2, value_f2;
f5 = FILTER f5 BY id_f2 IS NOT NULL; -- unmatched f1 ids leave all-null f2 fields
f6 = UNION f4, f5;
f7 = DISTINCT f6;
f8 = ORDER f7 BY $0;
DUMP f8;
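One caveat with the LEFT OUTER JOIN approach: f1 rows with no match in f2 pair with nulls, so projecting the f2 side would contribute an all-null row unless those rows are filtered out. A plain-Python sketch of this dataflow (not Pig; using a hypothetical iid 5 that appears only in f1) shows why the null filter matters:

```python
from collections import defaultdict

f1 = [(1, "Name", "Tom Linkon"), (2, "Name", "Whatever"),
      (5, "Name", "F1-only row")]
f2 = [(1, "City", "New York"), (3, "City", "Fake fake")]

# LEFT OUTER JOIN f1 BY id, f2 BY id: unmatched f1 rows pair with None.
f2_by_id = defaultdict(list)
for row in f2:
    f2_by_id[row[0]].append(row)
f3 = [(l, r) for l in f1 for r in (f2_by_id.get(l[0]) or [None])]

f4 = {l for l, _ in f3}                   # the f1 side: always complete
f5 = {r for _, r in f3 if r is not None}  # the f2 side, null rows dropped
out = sorted(f4 | f5)                     # UNION + DISTINCT + ORDER
```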