I have a dataset A with a timestamp, visitor, and URL:
(2012-07-21T14:00:00.000Z, joe, hxxp://www.aaa.com)
(2012-07-21T14:01:00.000Z, mary, hxxp://www.bbb.com)
(2012-07-21T14:02:00.000Z, joe, hxxp://www.aaa.com)
I want to count the visits per visitor per URL within a 10-minute window, but as a rolling window that advances in one-minute increments. The output would be:
(2012-07-21T14:00 to 2012-07-21T14:10, joe, hxxp://www.aaa.com, 2)
(2012-07-21T14:01 to 2012-07-21T14:11, joe, hxxp://www.aaa.com, 1)
To simplify the arithmetic, I converted the timestamps to the minute of the day, like this:
(840, joe, hxxp://www.aaa.com) /* 840 = 14:00 hrs x 60 + 00 mins */
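As a quick sanity check of that conversion, here is a minimal Python helper (not part of the original post; the function name `minute_of_day` is my own) that maps the ISO-8601 timestamps from dataset A to the minute-of-day integer used above:

```python
from datetime import datetime

def minute_of_day(ts: str) -> int:
    # Strip the trailing 'Z' so datetime.fromisoformat can parse the string
    # on older Python versions, then compute hours * 60 + minutes.
    dt = datetime.fromisoformat(ts.rstrip("Z"))
    return dt.hour * 60 + dt.minute

# minute_of_day("2012-07-21T14:00:00.000Z") == 840
```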
To iterate over 'A' with a moving time window, I created a dataset B of the minutes in the day:
(0)
(1)
(2)
...
(1440)
Ideally, I would like to do something like this:
A = load 'dataset1' AS (ts, visitor, uri)
B = load 'dataset2' as (minute)
foreach B {
    C = filter A by ts > minute AND ts < minute + 10;
    D = GROUP C BY (visitor, uri);
    foreach D GENERATE group, count(C) as mycnt;
}
DUMP B;
I know that GROUP is not allowed inside a nested FOREACH, but is there a workaround that achieves the same effect?
Thanks!
Answer 0 (Score: 2)
Maybe you can do something like this?
Note: this relies on the minutes in your log being integers. If they are not, you can round them to the nearest minute first.
#!/usr/bin/python
@outputSchema('expanded: {(num:int)}')
def expand(start, end):
    # Return a bag of single-field tuples, one per minute in [start, end).
    # Note: (x,) is needed to make a tuple; (x) is just x.
    return [(x,) for x in range(start, end)]
register 'myudf.py' using jython as myudf ;
-- A1 is the minutes. Schema:
-- A1: {minute: int}
-- A2 is the logs. Schema:
-- A2: {minute: int,name: chararray}
-- These schemas should change to fit your needs.
B = FOREACH A1 GENERATE minute,
FLATTEN(myudf.expand(minute, minute+10)) AS matchto ;
-- B is in the form:
-- 1 1
-- 1 2
-- ....
-- 2 2
-- 2 3
-- ....
-- 100 100
-- 100 101
-- etc.
-- Now we join on the minute in the second column of B with the
-- minute in the log, then it is just grouping by the minute in
-- the first column and name and counting
C = JOIN B BY matchto, A2 BY minute ;
D = FOREACH (GROUP C BY (B::minute, name))
GENERATE FLATTEN(group), COUNT(C) as count ;
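To see what the expand-then-join-then-group pipeline computes, here is a minimal Python sketch of the same logic on toy data (the log entries are made up to mirror the question; this is not Pig, just an illustration of the counting):

```python
from collections import Counter

# Toy log mirroring relation A2: (minute, name) pairs.
logs = [(840, "joe"), (841, "mary"), (842, "joe")]

WINDOW = 10
counts = Counter()
for minute, name in logs:
    # An event at minute m belongs to every window starting at m-9 .. m,
    # which is exactly the set of pairs the expand UDF + JOIN produces.
    for start in range(minute - WINDOW + 1, minute + 1):
        counts[(start, name)] += 1

# counts[(840, "joe")] == 2  -> joe at minutes 840 and 842, both in [840, 850)
# counts[(841, "joe")] == 1  -> only the 842 event falls in [841, 851)
```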
I am a bit worried about speed on larger logs, but it should work. Let me know if you need me to explain anything.
Answer 1 (Score: 0)
A = load 'dataSet1' as (ts, visitor, uri);
houred = FOREACH A GENERATE visitor, org.apache.pig.tutorial.ExtractHour(ts) as hour, uri;
hour_frequency1 = GROUP houred BY (hour, visitor);
Something like this should help. ExtractHour is a UDF; you can create something similar for the duration you need. Then group by hour and visitor, and you can count with GENERATE.
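One caveat: bucketing by an extracted hour (or any fixed duration) gives tumbling windows, not the rolling windows the question asks for. A minimal Python sketch of this fixed-bucket counting, on made-up data, shows the difference:

```python
from collections import Counter

# Toy log: (minute, visitor) pairs (made up for illustration).
logs = [(840, "joe"), (841, "mary"), (842, "joe")]

BUCKET = 10  # analogous to ExtractHour, but for a 10-minute duration

# Each event falls into exactly one fixed bucket: minute // BUCKET * BUCKET.
counts = Counter(
    ((minute // BUCKET) * BUCKET, visitor) for minute, visitor in logs
)
# counts[(840, "joe")] == 2 -> both joe events land in bucket [840, 850)
```

With rolling windows an event at minute 842 would also be counted in the windows starting at 833..842; here it is counted once, in the single bucket containing it.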