I am trying to create a table on Hive using Beeline. The data is stored in HDFS as Parquet files, with the following schema:
{
  "object_type": "test",
  "heartbeat": 1496755564224,
  "events": [
    {
      "timestamp": 1496755582985,
      "hostname": "hostname1",
      "instance": "instance1",
      "metrics_array": [
        {
          "metric_name": "metric1_1",
          "metric_value": "value1_1"
        }
      ]
    },
    {
      "timestamp": 1496756626551,
      "hostname": "hostname2",
      "instance": "instance1",
      "metrics_array": [
        {
          "metric_name": "metric2_1",
          "metric_value": "value2_1"
        }
      ]
    }
  ]
}
My HQL script to create the table is as follows:
set hive.support.sql11.reserved.keywords=false;

CREATE DATABASE IF NOT EXISTS datalake;
DROP TABLE IF EXISTS datalake.test;

CREATE EXTERNAL TABLE IF NOT EXISTS datalake.test
(
  object_type STRING,
  heartbeat BIGINT,
  events STRUCT<
    metrics_array: STRUCT<
      metric_name: STRING,
      metric_value: STRING
    >,
    timestamp: BIGINT,
    hostname: STRING,
    instance: STRING
  >
)
STORED AS PARQUET
LOCATION '/tmp/test/';
Here is the error I get when I run SELECT * FROM datalake.test:
Error: java.io.IOException: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://tmp/test/part-r-00000-7e58b193-a08f-44b1-87fa-bb12b4053bdf.gz.parquet (state=,code=0)
Any ideas?

Thanks!
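For what it's worth, here is my current guess at a fix, in case it helps narrow things down. In the JSON above, `events` and `metrics_array` are both JSON arrays, but my DDL declares them as plain `STRUCT`s, and the struct fields are listed in a different order than they appear in the data. Since Hive/Parquet needs the table schema to line up with the file schema, either mismatch might explain the decoding error. A sketch of a corrected DDL (untested, and the field order is my assumption about how the Parquet files were written):

```sql
CREATE EXTERNAL TABLE IF NOT EXISTS datalake.test
(
  object_type STRING,
  heartbeat BIGINT,
  -- events is an array of structs, matching the JSON "events": [ {...}, {...} ]
  events ARRAY<STRUCT<
    timestamp: BIGINT,
    hostname: STRING,
    instance: STRING,
    -- metrics_array is likewise an array of structs
    metrics_array: ARRAY<STRUCT<
      metric_name: STRING,
      metric_value: STRING
    >>
  >>
)
STORED AS PARQUET
LOCATION '/tmp/test/';
```

I have not confirmed that this matches the actual Parquet schema of the files (e.g. via parquet-tools schema), so corrections are welcome.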