Hi everyone, I'm trying to solve the following problem. I have a file whose records look like this:
(id#123, event#sasa, value#abcde, time#213, userid#21321)
To get the total number of records:
data_count = FOREACH (GROUP data ALL) GENERATE COUNT(data);
And to get the total number of users:
group_users = GROUP data BY userid;
grp_all = GROUP group_users ALL;
count_users = FOREACH grp_all GENERATE COUNT(group_users);
Now I'd like to know how to combine these into a single output file with the fields (id, event, value, time, total_data, total_users).
Thanks a lot.
Answer 0 (score: 1)
I'm not sure what you mean by total data, but if you want to return the original rows together with the total number of users, you need to use FLATTEN more than once. Pig is not SQL: it works on bags, and FLATTEN turns a bag back into rows. For example:
data = LOAD './data.csv' USING PigStorage(',') AS (e_id, e_name, value, time, userid);
group_users = GROUP data BY userid;
grp_all = GROUP group_users ALL;
DESCRIBE grp_all;
-- grp_all: {group: chararray,group_users: {(group: bytearray,data: {(e_id: bytearray,e_name: bytearray,value: bytearray,time: bytearray,userid: bytearray)})}}
uniq_users = FOREACH grp_all GENERATE FLATTEN(group_users), COUNT(group_users) as total_users;
describe uniq_users;
-- uniq_users: {group_users::group: bytearray,group_users::data: {(e_id: bytearray,e_name: bytearray,value: bytearray,time: bytearray,userid: bytearray)},total_users: long}
original = FOREACH uniq_users GENERATE FLATTEN(data), total_users;
describe original;
-- original: {group_users::data::e_id: bytearray,group_users::data::e_name: bytearray,group_users::data::value: bytearray,group_users::data::time: bytearray,group_users::data::userid: bytearray,total_users: long}
DUMP original;
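If you also want to carry the total number of records that the question asks for, a sketch along the same lines (not part of the original answer; the aliases total_data and with_totals are illustrative) is to compute that count with a separate GROUP ... ALL and attach it with CROSS. Because the count relation has exactly one row, the CROSS simply appends the count to every row:
-- compute the overall record count as a one-row relation
data_count = FOREACH (GROUP data ALL) GENERATE COUNT(data) AS total_data;
-- data_count holds a single row, so this just adds total_data to each row of original
with_totals = CROSS original, data_count;
DUMP with_totals;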
Answer 1 (score: 0)
I got it done with this script:
d1 = LOAD 'data' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') AS (json:map[]);
d2 = foreach d1 generate
    json#'event' AS EVENT,
    json#'params'#'uid' AS USER,
    ToDate(((long)json#'ts')*1000) AS DATE;
grpd = group d2 by EVENT;
uniq2 = foreach grpd {
    usr = d2.USER;
    unq_usr = distinct usr;
    generate group,
        d2.DATE,
        COUNT(d2.EVENT),
        COUNT(unq_usr);
};
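As a small follow-up (not in the original answer; the aliases named and event_counts are just illustrative), the nested FOREACH above leaves its outputs unnamed, so a final projection can give them readable names before writing the result out:
-- $1 is the bag of dates, $2 the event count, $3 the distinct-user count
named = FOREACH uniq2 GENERATE group AS event, $1 AS dates, $2 AS total_events, $3 AS total_users;
STORE named INTO 'event_counts' USING PigStorage(',');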