How to achieve the expected output using Hive

Date: 2014-01-29 05:07:00

Tags: hadoop amazon-s3 hive

1. Table1 and Table2 are related: each px/coo combination in Table1 has timed entries in Table2. I need the last (most recent) entry for each px/coo combination. How can I achieve this with Hive? The expected output is shown below for reference.

Table 1

px1    coo1
px1    coo2
px1    coo3
px2    coo2
px2    coo4
px3    coo3
px4    coo4

Table 2

id1     2014-01-01 21:23:23,273     px1    coo1
id2     2014-01-01 22:01:22,377     px1    coo1
id3     2014-01-01 22:25:06,196     px1    coo1
id4     2014-01-01 22:51:39,487     px1    coo1
id5     2014-01-01 02:05:57,875     px1    coo2
id6     2014-01-01 02:09:42,675     px1    coo2
id7     2014-01-01 23:19:42,059     px1    coo3
id8     2014-01-01 23:34:51,782     px1    coo3
id9     2014-01-01 06:13:05,531     px2    coo2
id10    2014-01-01 06:27:36,676     px2    coo2
id11    2014-01-01 06:59:43,999     px2    coo2
id12    2014-01-01 09:21:57,325     px3    coo3
id13    2014-01-01 17:19:06,956     px4    coo4
id14    2014-01-01 17:27:05,128     px4    coo4

The expected output should be:

id4     2014-01-01 22:51:39,487     px1    coo1
id6     2014-01-01 02:09:42,675     px1    coo2
id8     2014-01-01 23:34:51,782     px1    coo3
id11    2014-01-01 06:59:43,999     px2    coo2
id12    2014-01-01 09:21:57,325     px3    coo3
id14    2014-01-01 17:27:05,128     px4    coo4

2 Answers:

Answer 0 (score: 2)

Assuming the result can be derived from table2 alone (that is, by operating on table2 itself, since each row already carries the matching pix_id and coo_id), the query below does it. Apologies if this assumption is wrong.

hive (sflow)> desc table2;
OK
col_name    data_type   comment
id  string  from deserializer
time_stamp  string  from deserializer
pix_id  string  from deserializer
coo_id  string  from deserializer
Time taken: 0.277 seconds

hive(sflow)>

SELECT t2.id, t2.time_stamp, t2.pix_id, t2.coo_id
FROM table2 t2
JOIN ( SELECT pix_id, coo_id, MAX(UNIX_TIMESTAMP(time_stamp)) AS max_epoch
       FROM table2
       GROUP BY pix_id, coo_id ) temp
  ON t2.pix_id = temp.pix_id AND t2.coo_id = temp.coo_id
WHERE UNIX_TIMESTAMP(t2.time_stamp) = temp.max_epoch;
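On Hive 0.11 or later, a window function avoids the self-join entirely. This is only a sketch, assuming the table2 schema shown above; it orders by the raw time_stamp string, which sorts chronologically because the format is yyyy-MM-dd HH:mm:ss,SSS:

```sql
-- Sketch (assumes Hive 0.11+ for windowing functions).
-- Rank the rows within each pix_id/coo_id group, newest first,
-- then keep only the top-ranked row per group.
SELECT id, time_stamp, pix_id, coo_id
FROM (
  SELECT id, time_stamp, pix_id, coo_id,
         ROW_NUMBER() OVER (PARTITION BY pix_id, coo_id
                            ORDER BY time_stamp DESC) AS rn
  FROM table2
) ranked
WHERE rn = 1;
```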

PS: here is the complete log, copied verbatim (note that I am running Hadoop in pseudo-distributed mode, with Hive 0.9 and 2 GB RAM):

hive (sflow)> from table2 t2 join (select pix_id,coo_id, Max(UNIX_TIMESTAMP(time_stamp)) as max_epoch from table2 group by pix_id,coo_id) temp
            > select t2.id,t2.time_stamp,t2.pix_id,t2.coo_id where t2.pix_id=temp.pix_id and t2.coo_id=temp.coo_id and UNIX_TIMESTAMP(t2.time_stamp) = max_epoch ;

Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
Total MapReduce CPU Time Spent: 24 seconds 0 msec
OK
id  time_stamp  pix_id  coo_id
id4 2014-01-01 22:51:39,487 px1 coo1
id6 2014-01-01 02:09:42,675 px1 coo2
id8 2014-01-01 23:34:51,782 px1 coo3
id11    2014-01-01 06:59:43,999 px2 coo2
id12    2014-01-01 09:21:57,325 px3 coo3
id14    2014-01-01 17:27:05,128 px4 coo4
Time taken: 145.17 seconds


Answer 1 (score: 2)

You can use the collect_max UDF from Brickhouse (http://github.com/klout/brickhouse) to generate this with only one job step.

select array_index( map_keys( max_map ), 0 ) as id,
    from_unixtime( array_index( map_values( max_map ), 0 ) ) as time_stamp,
    pix_id,
    coo_id
from (
   select pix_id, coo_id, 
       collect_max( id, unix_timestamp(time_stamp) ) as max_map
   from table2
   group by pix_id, coo_id ) cm ;
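Note that the Brickhouse jar has to be registered in the session before collect_max is available. A sketch of the setup follows; the jar path is hypothetical, and the UDF class names should be verified against the Brickhouse sources for your version:

```sql
-- Hypothetical jar path; adjust to where you built or downloaded Brickhouse.
ADD JAR /path/to/brickhouse.jar;
-- Class names follow the Brickhouse project layout; verify against your jar.
CREATE TEMPORARY FUNCTION collect_max AS 'brickhouse.udf.collect.CollectMaxUDAF';
CREATE TEMPORARY FUNCTION array_index AS 'brickhouse.udf.collect.ArrayIndexUDF';
```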

For small datasets it doesn't matter much, but for very large datasets it lets you solve the problem in a single pass over the data.