Loading a JSON array file in Hive

Time: 2018-03-06 00:21:03

Tags: json hive

I have a file containing the following data:

[{"col1":"col1","col2":1}
,{"col1":"col11","col2":11}
,{"col1":"col111","col2":2}
]

I am trying to load it into a table in Hive.

I am using the following Hive SerDe:

CREATE EXTERNAL TABLE my_table (
      my_array ARRAY<struct<col1:string,col2:int>>
)ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
WITH SERDEPROPERTIES ( "ignore.malformed.json" = "true")
LOCATION "MY_LOCATION";

After running the create command, I get the following error when I try to run select * -

['*org.apache.hive.service.cli.HiveSQLException:java.io.IOException: org.apache.hadoop.hive.serde2.SerDeException: java.io.IOException: Start token not found where expected:25:24', 'org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:499', 'org.apache.hive.service.cli.operation.OperationManager:getOperationNextRowSet:OperationManager.java:307', 'org.apache.hive.service.cli.session.HiveSessionImpl:fetchResults:HiveSessionImpl.java:878', 'sun.reflect.GeneratedMethodAccessor29:invoke::-1', 'sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43', 'java.lang.reflect.Method:invoke:Method.java:498', 'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:78', 'org.apache.hive.service.cli.session.HiveSessionProxy:access$000:HiveSessionProxy.java:36', 'org.apache.hive.service.cli.session.HiveSessionProxy$1:run:HiveSessionProxy.java:63', 'java.security.AccessController:doPrivileged:AccessController.java:-2', 'javax.security.auth.Subject:doAs:Subject.java:422', 'org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1698', 'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:59', 'com.sun.proxy.$Proxy35:fetchResults::-1', 'org.apache.hive.service.cli.CLIService:fetchResults:CLIService.java:559', 'org.apache.hive.service.cli.thrift.ThriftCLIService:FetchResults:ThriftCLIService.java:751', 'org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1717', 'org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1702', 'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39', 'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39', 'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56', 'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:286', 'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1149', 'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:624', 'java.lang.Thread:run:Thread.java:748', '*java.io.IOException:org.apache.hadoop.hive.serde2.SerDeException: java.io.IOException: Start token not found where expected:29:4', 'org.apache.hadoop.hive.ql.exec.FetchOperator:getNextRow:FetchOperator.java:521', 'org.apache.hadoop.hive.ql.exec.FetchOperator:pushRow:FetchOperator.java:428', 'org.apache.hadoop.hive.ql.exec.FetchTask:fetch:FetchTask.java:147', 'org.apache.hadoop.hive.ql.Driver:getResults:Driver.java:2207', 'org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:494', '*org.apache.hadoop.hive.serde2.SerDeException:java.io.IOException: Start token not found where expected:30:1', 'org.apache.hive.hcatalog.data.JsonSerDe:deserialize:JsonSerDe.java:184', 'org.apache.hadoop.hive.ql.exec.FetchOperator:getNextRow:FetchOperator.java:502', '*java.io.IOException:Start token not found where expected:30:0', 'org.apache.hive.hcatalog.data.JsonSerDe:deserialize:JsonSerDe.java:170'], statusCode=3), results=None, hasMoreRows=None)

I have tried several things, but none of them worked as expected. I cannot change the input data format, since the data is supplied by someone else.

3 Answers:

Answer 0 (score: 1)

This is a malformed-JSON problem as far as Hive is concerned: the JsonSerDe expects each record to be a JSON object, with curly braces at the start and end of the line, not a top-level array. So change your JSON file to look like the following (all on one line):

{"my_array":[{"col1":"col1","col2":1},{"col1":"col11","col2":11},{"col1":"col111","col2":2}]}

Create the table the same way you already have:

CREATE EXTERNAL TABLE my_table 
(
      my_array ARRAY<struct<col1:string,col2:int>>
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
WITH SERDEPROPERTIES ( "ignore.malformed.json" = "true")
LOCATION "MY_LOCATION";

Now fire a select * on the newly created table to see the following result.
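For reference, the query behind that result is simply (using the table name from the DDL above):

select * from my_table;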


[{"col1":"col1","col2":1},{"col1":"col11","col2":11},{"col1":"col111","col2":2}]

Use select my_array.col1 from my_table; to see the values of col1 in the array.


["col1","col11","col111"]

PS - this is not the most efficient way to store the data. Consider transforming the data and storing it as ORC/Parquet.
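A minimal sketch of that conversion, assuming the table and column names above (the target table name my_table_orc is made up for illustration):

-- One-off copy of the parsed JSON into an ORC-backed table.
CREATE TABLE my_table_orc STORED AS ORC AS
SELECT my_array FROM my_table;

Downstream queries can then read my_table_orc instead of re-parsing the JSON text on every scan.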

Hope that helps!

Answer 1 (score: 0)

It looks like the problem is with your JSON data. Could you try the example below?

Create an employee JSON file with the following content and put it in HDFS.

[root@quickstart spark]# hadoop fs -cat /user/cloudera/spark/employeejson/*
{"Name":"Vinayak","age":35}
{"Name":"Nilesh","age":37}
{"Name":"Raju","age":30}
{"Name":"Karthik","age":28}
{"Name":"Shreshta","age":1}
{"Name":"Siddhish","age":2}

Add the following jar (only needed if you run into an error without it).

hive> ADD JAR /usr/lib/hive-hcatalog/lib/hive-hcatalog-core.jar;

hive> 
CREATE TABLE employeefromjson(name string, age int)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE
LOCATION '/user/cloudera/hive/employeefromjson'
;

hive> LOAD DATA INPATH '/user/cloudera/spark/employeejson' OVERWRITE INTO TABLE employeefromjson;

hive> select * from employeefromjson;
OK
Vinayak 35
Nilesh  37
Raju    30
Karthik 28
Shreshta    1
Siddhish    2
Time taken: 0.174 seconds, Fetched: 6 row(s)
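As a quick sanity check that individual fields are being parsed, a follow-up query along these lines should also work (illustrative only, not part of the original answer):

hive> select name, age from employeefromjson where age > 30;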

Answer 2 (score: 0)

For this SerDe, a JSON record should always begin with '{', not '['. That is the problem. As you know, JSON has a {'key':'value'} structure; what you have in your file are values without any key. So change your JSON to the following format:

{"my_array":[{"col1":"col1","col2":1},{"col1":"col11","col2":11},{"col1":"col111","col2":2}]}

Your CREATE TABLE statement should then work fine.

If you want the data of each column across all rows, use the following query.

select my_array.col1, my_array.col2 from my_table;

The above command will give you the following result.

 OK
["col1","col11","col111"]       [1,11,2]

If you want to get the columns of each row individually, use the following query.

select a.* from my_table m lateral view outer inline (m.my_array) a;

The above command will give you the following result.

OK
col1    1
col11   11
col111  2
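If you only need particular struct fields per row, the same lateral view can also project them by name, since inline() exposes the struct's fields as columns (a sketch based on the query above):

select a.col1, a.col2 from my_table m lateral view outer inline (m.my_array) a;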

Hope this helps!