I upgraded CDH from 5.0.0 beta to 5.2.1 using Cloudera Manager. All services are running fine except the Hive Metastore, which uses MySQL as its backing database.
When I run the following query, I cannot drop the table from Hive:
`drop table test;`
I get the following error:
hive> drop table test;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDOException: Exception thrown when executing query
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:275)
at org.apache.hadoop.hive.metastore.ObjectStore.deleteTableColumnStatistics(ObjectStore.java:6030)
at org.apache.hadoop.hive.metastore.ObjectStore.dropTable(ObjectStore.java:813)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
at com.sun.proxy.$Proxy0.dropTable(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1386)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
at com.sun.proxy.$Proxy5.drop_table_with_environment_context(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
NestedThrowablesStackTrace:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column AO.BIG_DECIMAL_HIGH_VALUE in field list
Reference link
Can anyone suggest what might be going wrong?
Answer 0 (score: 1)
I had exactly the same problem. I tried what is suggested in the reference link, but I still hit the same issue (even after fixing every error I could see while running the script).
In the end I searched the Hive metastore scripts for BIG_DECIMAL_HIGH_VALUE and found it in every hive-schema-0.1*.0.mysql.sql file, in the following CREATE statements:
CREATE TABLE IF NOT EXISTS `TAB_COL_STATS` (
`CS_ID` bigint(20) NOT NULL,
`DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TBL_ID` bigint(20) NOT NULL,
`LONG_LOW_VALUE` bigint(20),
`LONG_HIGH_VALUE` bigint(20),
`DOUBLE_HIGH_VALUE` double(53,4),
`DOUBLE_LOW_VALUE` double(53,4),
`BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`NUM_NULLS` bigint(20) NOT NULL,
`NUM_DISTINCTS` bigint(20),
`AVG_COL_LEN` double(53,4),
`MAX_COL_LEN` bigint(20),
`NUM_TRUES` bigint(20),
`NUM_FALSES` bigint(20),
`LAST_ANALYZED` bigint(20) NOT NULL,
PRIMARY KEY (`CS_ID`),
CONSTRAINT `TAB_COL_STATS_FK` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE IF NOT EXISTS `PART_COL_STATS` (
`CS_ID` bigint(20) NOT NULL,
`DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`PART_ID` bigint(20) NOT NULL,
`LONG_LOW_VALUE` bigint(20),
`LONG_HIGH_VALUE` bigint(20),
`DOUBLE_HIGH_VALUE` double(53,4),
`DOUBLE_LOW_VALUE` double(53,4),
`BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`NUM_NULLS` bigint(20) NOT NULL,
`NUM_DISTINCTS` bigint(20),
`AVG_COL_LEN` double(53,4),
`MAX_COL_LEN` bigint(20),
`NUM_TRUES` bigint(20),
`NUM_FALSES` bigint(20),
`LAST_ANALYZED` bigint(20) NOT NULL,
PRIMARY KEY (`CS_ID`),
CONSTRAINT `PART_COL_STATS_FK` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
But when I checked those tables in the Hive metastore database, I got something different:
mysql> desc TAB_COL_STATS;
+-------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| CS_ID | bigint(20) | NO | PRI | NULL | |
| AVG_COL_LEN | double | YES | | NULL | |
| COLUMN_NAME | varchar(128) | NO | | NULL | |
| COLUMN_TYPE | varchar(128) | NO | | NULL | |
| DB_NAME | varchar(128) | NO | | NULL | |
| DOUBLE_HIGH_VALUE | double | YES | | NULL | |
| DOUBLE_LOW_VALUE | double | YES | | NULL | |
| LAST_ANALYZED | bigint(20) | NO | | NULL | |
| LONG_HIGH_VALUE | bigint(20) | YES | | NULL | |
| LONG_LOW_VALUE | bigint(20) | YES | | NULL | |
| MAX_COL_LEN | bigint(20) | YES | | NULL | |
| NUM_DISTINCTS | bigint(20) | YES | | NULL | |
| NUM_FALSES | bigint(20) | YES | | NULL | |
| NUM_NULLS | bigint(20) | NO | | NULL | |
| NUM_TRUES | bigint(20) | YES | | NULL | |
| TBL_ID | bigint(20) | YES | MUL | NULL | |
| TABLE_NAME | varchar(128) | NO | | NULL | |
+-------------------+--------------+------+-----+---------+-------+
17 rows in set (0.00 sec)
mysql> desc PART_COL_STATS;
+-------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| CS_ID | bigint(20) | NO | PRI | NULL | |
| AVG_COL_LEN | double | YES | | NULL | |
| COLUMN_NAME | varchar(128) | NO | | NULL | |
| COLUMN_TYPE | varchar(128) | NO | | NULL | |
| DB_NAME | varchar(128) | NO | | NULL | |
| DOUBLE_HIGH_VALUE | double | YES | | NULL | |
| DOUBLE_LOW_VALUE | double | YES | | NULL | |
| LAST_ANALYZED | bigint(20) | NO | | NULL | |
| LONG_HIGH_VALUE | bigint(20) | YES | | NULL | |
| LONG_LOW_VALUE | bigint(20) | YES | | NULL | |
| MAX_COL_LEN | bigint(20) | YES | | NULL | |
| NUM_DISTINCTS | bigint(20) | YES | | NULL | |
| NUM_FALSES | bigint(20) | YES | | NULL | |
| NUM_NULLS | bigint(20) | NO | | NULL | |
| NUM_TRUES | bigint(20) | YES | | NULL | |
| PART_ID | bigint(20) | YES | MUL | NULL | |
| PARTITION_NAME | varchar(767) | NO | | NULL | |
| TABLE_NAME | varchar(128) | NO | | NULL | |
+-------------------+--------------+------+-----+---------+-------+
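Before dropping anything, it is worth double-checking both points directly against the metastore database. The queries below are only a sketch; they assume the metastore database is literally named metastore (yours may differ):

-- Confirm the BIG_DECIMAL columns are really missing from the live tables
SELECT TABLE_NAME, COLUMN_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'metastore'
  AND TABLE_NAME IN ('TAB_COL_STATS', 'PART_COL_STATS')
  AND COLUMN_NAME LIKE 'BIG_DECIMAL%';

-- Confirm that no column statistics would be lost by recreating the tables
SELECT COUNT(*) FROM TAB_COL_STATS;
SELECT COUNT(*) FROM PART_COL_STATS;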
Since both tables were empty, I simply dropped them and recreated them using the CREATE statements listed in the hive-schema-0.13.0.mysql.sql file.
After doing that, I was able to drop Hive tables again.
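For reference, the repair itself boils down to something like this. It is only a sketch: it assumes the metastore database is named metastore, that both stats tables are empty (so nothing is lost), and that you have a backup of the metastore database just in case:

USE metastore;

-- These two tables are not referenced by any other table, so dropping them is safe
DROP TABLE IF EXISTS TAB_COL_STATS;
DROP TABLE IF EXISTS PART_COL_STATS;

-- Then re-run the two CREATE TABLE statements shown above, taken from
-- hive-schema-0.13.0.mysql.sql, which include the missing
-- BIG_DECIMAL_LOW_VALUE / BIG_DECIMAL_HIGH_VALUE columns.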