I am getting an error message, and it differs completely between my 2 test runs. I verified the data types — the values really are doubles — yet there seems to be a problem with the type cast. Why does this happen? Please help me fix it.
DROP TABLE XXSCM_SRC_SHIPMENTS;
CREATE TABLE IF NOT EXISTS XXSCM_SRC_SHIPMENTS(
INVENTORY_ITEM_ID DOUBLE
,ORDERED_ITEM STRING
,SHIP_FROM_ORG_ID DOUBLE
,QTR_START_DATE STRING
,QTR_END_DATE STRING
,SEQ DOUBLE
,EXTERNAL_SHIPMENTS DOUBLE
-- ,PREV_EXTERNAL_SHIPMENTS DOUBLE
,INTERNAL_SHIPMENTS DOUBLE
--,PREV_INTERNAL_SHIPMENTS DOUBLE
,AVG_SELL_PRICE DOUBLE)
--,PREV_AVG_SELL_PRICE DOUBLE)
COMMENT 'DIMENSION FOR THE SHIPMENTS LOCAL AND GLOBAL'
PARTITIONED BY (ORGANIZATION_CODE STRING, FISCAL_PERIOD STRING)
CLUSTERED BY (INVENTORY_ITEM_ID, ORDERED_ITEM, SHIP_FROM_ORG_ID, QTR_START_DATE, QTR_END_DATE, SEQ)
SORTED BY (INVENTORY_ITEM_ID ASC, ORDERED_ITEM ASC, SHIP_FROM_ORG_ID ASC, QTR_START_DATE ASC, QTR_END_DATE ASC, SEQ ASC)
INTO 256 BUCKETS
STORED AS ORC TBLPROPERTIES("orc.compress"="SNAPPY");
1) Fails with an error
SELECT inventory_item_id,ordered_item,ship_from_org_id,qtr_start_date,qtr_end_date,seq,external_shipments FROM supply_chain_pcam.XXSCM_SRC_SHIPMENTS limit 100
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row [Error getting row data with exception java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast to org.apache.hadoop.hive.serde2.io.DoubleWritable
2) Got the result successfully
hive -e "set hive.cli.print.header=true;select * from supply_chain_pcam.xxscm_src_shipments limit 100"
Answer (score: 0)
The problem is in the SHIP_FROM_ORG_ID field: the values stored there do not match the type declared in the DDL, so Hive cannot parse that field. You need to recreate the table XXSCM_SRC_SHIPMENTS with the correct data types. You almost had the answer yourself — since
select *
returns results, try the fields one at a time; the exception complains about casting to double, so concentrate on the DOUBLE columns.
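A minimal sketch of the recreate-and-reload fix suggested above, assuming the ORC files really hold integers for ship_from_org_id (confirm the exact column and type first, e.g. with the orcfiledump step shown earlier). The _fixed table name is hypothetical, and the reload has to come from the original source data, because selecting the broken column from the existing table fails:

-- Recreate with column types that match the data actually on disk
CREATE TABLE supply_chain_pcam.xxscm_src_shipments_fixed(
 INVENTORY_ITEM_ID DOUBLE
,ORDERED_ITEM STRING
,SHIP_FROM_ORG_ID INT          -- assumption: the files contain integers here
,QTR_START_DATE STRING
,QTR_END_DATE STRING
,SEQ DOUBLE
,EXTERNAL_SHIPMENTS DOUBLE
,INTERNAL_SHIPMENTS DOUBLE
,AVG_SELL_PRICE DOUBLE)
PARTITIONED BY (ORGANIZATION_CODE STRING, FISCAL_PERIOD STRING)
STORED AS ORC TBLPROPERTIES("orc.compress"="SNAPPY");

-- Reload from the original source (placeholder), casting explicitly, then swap the tables
-- INSERT OVERWRITE TABLE supply_chain_pcam.xxscm_src_shipments_fixed PARTITION(ORGANIZATION_CODE, FISCAL_PERIOD)
-- SELECT ..., CAST(ship_from_org_id AS INT), ... FROM <original_source_table>;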