Dropping Hive partitions using ranges affects the Metastore

Date: 2018-08-21 15:58:45

Tags: hive partition metastore

I am using Hadoop 2.6.0-cdh5.14.2 and Hive 1.1.0-cdh5.14.2. On this system there is a huge external table with more than 183K partitions, and the command:

0: jdbc:hive2://hiveserver2.hd.docomodigital.> drop table unifieddata_work.old__raw_ww_eventsjson

does not work: the Metastore does not reply within 600 seconds and the task ends with an error.
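
For reference, the 600-second wait is presumably governed by the standard Hive property hive.metastore.client.socket.timeout (value in seconds). A minimal sketch of raising it for the session before retrying the drop, assuming a per-session override is allowed in this CDH setup:

0: jdbc:hive2://hiveserver2.hd.docomodigital.> SET hive.metastore.client.socket.timeout=3600;
0: jdbc:hive2://hiveserver2.hd.docomodigital.> drop table unifieddata_work.old__raw_ww_eventsjson;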

I tried to drop the partitions using the following range:

0: jdbc:hive2://hiveserver2.hd.docomodigital.> alter table unifieddata_work.old__raw_ww_eventsjson drop PARTITION (country='ae', year='2017', month='01', day>'29', hour > '00' );

INFO  : Compiling command(queryId=hive_20180821140909_ba6c4bb0-d0de-4fd3-a5ec-47e217289c6b): alter table unifieddata_work.old__raw_ww_eventsjson drop PARTITION (country='ae', year='2017', month='01', day>'29', hour > '00' )
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO  : Completed compiling command(queryId=hive_20180821140909_ba6c4bb0-d0de-4fd3-a5ec-47e217289c6b); Time taken: 0.612 seconds
INFO  : Executing command(queryId=hive_20180821140909_ba6c4bb0-d0de-4fd3-a5ec-47e217289c6b): alter table unifieddata_work.old__raw_ww_eventsjson drop PARTITION (country='ae', year='2017', month='01', day>'29', hour > '00' )
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Dropped the partition country=ae/year=2017/month=01/day=30/hour=01
INFO  : Dropped the partition country=ae/year=2017/month=01/day=30/hour=02
INFO  : Dropped the partition country=ae/year=2017/month=01/day=30/hour=03
INFO  : Dropped the partition country=ae/year=2017/month=01/day=30/hour=04
INFO  : Dropped the partition country=ae/year=2017/month=01/day=30/hour=05
INFO  : Dropped the partition country=ae/year=2017/month=01/day=30/hour=06
INFO  : Dropped the partition country=ae/year=2017/month=01/day=30/hour=07
INFO  : Dropped the partition country=ae/year=2017/month=01/day=30/hour=08
INFO  : Dropped the partition country=ae/year=2017/month=01/day=30/hour=09
... CUT HERE ...

It works, but something bad happens to the Metastore: the canary stops working. Any ideas on how to fix this? Is there another way to drop such a large table?
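
For completeness, here is a minimal sketch of the same range-based cleanup issued in smaller slices (for example one day at a time), assuming this Hive version accepts IF EXISTS and a partial partition spec on DROP PARTITION; each statement then touches far fewer partitions per Metastore call:

0: jdbc:hive2://hiveserver2.hd.docomodigital.> alter table unifieddata_work.old__raw_ww_eventsjson drop if exists PARTITION (country='ae', year='2017', month='01', day='30');
0: jdbc:hive2://hiveserver2.hd.docomodigital.> alter table unifieddata_work.old__raw_ww_eventsjson drop if exists PARTITION (country='ae', year='2017', month='01', day='31');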

0 Answers:

No answers yet.