Hive join query is very slow

Date: 2020-09-21 15:37:47

Tags: sql hadoop hive hiveql

Hi, I have two tables, user_info and ip_location; one has 50,000 rows and the other 100,000. I need to look up each user's location from the user table's IP address: convert the dotted IP to an integer and match it against the intervals in ip_location.

My Hive version is 3.0.0, and this version has no indexes.
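For reference, the dotted-quad-to-integer conversion used in all the queries below can be sketched in Python (the function name is mine, not from the original post):

```python
def ip_to_int(ip: str) -> int:
    """Convert a dotted-quad IPv4 string to its 32-bit integer value."""
    a, b, c, d = (int(part) for part in ip.split("."))
    return a * 256**3 + b * 256**2 + c * 256 + d

print(ip_to_int("1.2.3.4"))  # 16909060
```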

ip_location: (screenshot of the ip_location table omitted)

This query runs very fast in PostgreSQL:

set search_path = res;

select *
from (
  select ip,
         (split_part(ip, '.', 1)::bigint * 256 * 256 * 256
        + split_part(ip, '.', 2)::bigint * 256 * 256
        + split_part(ip, '.', 3)::bigint * 256
        + split_part(ip, '.', 4)::bigint)::int8 as ipvalue
  from user_info
) t1
left join ip_location t2
  on ipv4_val_begin = (select max(ipv4_val_begin)
                       from ip_location
                       where ipv4_val_begin <= ipvalue);

But I could not find an equivalent for this syntax in Hive:

select ip,
       t2.location_country,
       cast(split(ip, "\\.")[0] as bigint) * 256 * 256 * 256
     + cast(split(ip, "\\.")[1] as bigint) * 256 * 256
     + cast(split(ip, "\\.")[2] as bigint) * 256
     + cast(split(ip, "\\.")[3] as bigint) as ipvalue
from source.v_dm_vip_user t1
left join res.ip_location t2
  on ipv4_val_begin = (select max(ipv4_val_begin)
                       from res.ip_location
                       where ipv4_val_begin <= ipvalue);

Error: (error screenshot omitted)

Changing it to the following SQL succeeds, but it is very slow, taking about a day:

select ip,
       t2.location_country,
       cast(split(ip, "\\.")[0] as bigint) * 256 * 256 * 256
     + cast(split(ip, "\\.")[1] as bigint) * 256 * 256
     + cast(split(ip, "\\.")[2] as bigint) * 256
     + cast(split(ip, "\\.")[3] as bigint) as ipvalue
from source.v_dm_vip_user t1
left join res.ip_location t2
  on cast(split(ip, "\\.")[0] as bigint) * 256 * 256 * 256
   + cast(split(ip, "\\.")[1] as bigint) * 256 * 256
   + cast(split(ip, "\\.")[2] as bigint) * 256
   + cast(split(ip, "\\.")[3] as bigint) > ipv4_val_begin
  and cast(split(ip, "\\.")[0] as bigint) * 256 * 256 * 256
    + cast(split(ip, "\\.")[1] as bigint) * 256 * 256
    + cast(split(ip, "\\.")[2] as bigint) * 256
    + cast(split(ip, "\\.")[3] as bigint) < ipv4_val_end;

Is there a better way to write this SQL? I have tried many things without success. Thanks.
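The lookup both queries express, finding the largest ipv4_val_begin that does not exceed the IP's integer value, is a binary search over the sorted range starts. A minimal Python sketch of that idea (the table data and function names here are made up for illustration, not from the original post):

```python
import bisect

# Hypothetical ip_location rows: (ipv4_val_begin, location_country), sorted by begin.
ranges = [(0, "unknown"), (16777216, "CN"), (33554432, "US")]
begins = [b for b, _ in ranges]

def lookup(ipvalue: int) -> str:
    """Return the country whose range has the largest begin <= ipvalue."""
    i = bisect.bisect_right(begins, ipvalue) - 1
    return ranges[i][1]

print(lookup(16909060))  # 1.2.3.4 falls in the range starting at 16777216
```

In Hive itself, a common workaround for slow range joins is to map-join (broadcast) the small ip_location table, or to pre-expand each range to a shared prefix such as the first octet so the join gains an equality key; both are standard techniques rather than anything confirmed by the original post.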

1 answer:

Answer 0 (score: 0)

I tried views and row-group indexes, but they did not speed things up. I would like to know how to make IP address range lookups fast in Hive; Hive on Spark is also slow for this.