Using a JSON-SerDe in Hive tables

Asked: 2013-02-05 10:57:03

Tags: hadoop hive

I am trying out the JSON-SerDe from the following link: http://code.google.com/p/hive-json-serde/wiki/GettingStarted

         CREATE TABLE my_table (field1 string, field2 int, 
                                     field3 string, field4 double)
         ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.JsonSerde' ;

I added the JSON-SerDe jar with

          ADD JAR /path-to/hive-json-serde.jar;

and loaded the data with

LOAD DATA LOCAL INPATH  '/home/hduser/pradi/Test.json' INTO TABLE my_table;

The data loaded successfully.

But when I query the data with

select * from my_table;

I get only one row back from the table:

data1 100 more data1 123.001

Test.json contains:

{"field1":"data1","field2":100,"field3":"more data1","field4":123.001} 

{"field1":"data2","field2":200,"field3":"more data2","field4":123.002} 

{"field1":"data3","field2":300,"field3":"more data3","field4":123.003} 

{"field1":"data4","field2":400,"field3":"more data4","field4":123.004}

Where is the problem? Why do I get only one row instead of 4 when I query the table, even though /user/hive/warehouse/my_table contains all 4 rows?!


hive> add jar /home/hduser/pradeep/hive-json-serde-0.2.jar;
Added /home/hduser/pradeep/hive-json-serde-0.2.jar to class path
Added resource: /home/hduser/pradeep/hive-json-serde-0.2.jar

hive> CREATE EXTERNAL TABLE my_table (field1 string, field2 int,
>                                 field3 string, field4 double)
> ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.JsonSerde'
> WITH SERDEPROPERTIES (
>   "field1"="$.field1",
>   "field2"="$.field2",
>   "field3"="$.field3",
>   "field4"="$.field4"
> );
OK
Time taken: 0.088 seconds

hive> LOAD DATA LOCAL INPATH  '/home/hduser/pradi/test.json' INTO TABLE my_table;
Copying data from file:/home/hduser/pradi/test.json
Copying file: file:/home/hduser/pradi/test.json
Loading data to table default.my_table
OK
Time taken: 0.426 seconds

hive> select * from my_table;
OK
data1   100     more data1      123.001
Time taken: 0.17 seconds

I have posted the contents of the test.json file above, so you can see that the query produces only one row:

data1   100     more data1      123.001

I then changed the JSON file to employees.json, which contains

{ "firstName":"Mike", "lastName":"Chepesky", "employeeNumber":1840192 }

and changed the table accordingly, but it shows NULL values when I query it:

hive> add jar /home/hduser/pradi/hive-json-serde-0.2.jar;
Added /home/hduser/pradi/hive-json-serde-0.2.jar to class path
Added resource: /home/hduser/pradi/hive-json-serde-0.2.jar

hive> create EXTERNAL table employees_json (firstName string, lastName string,        employeeNumber int )
> ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.JsonSerde';
OK
Time taken: 0.297 seconds


hive> load data local inpath '/home/hduser/pradi/employees.json' into table     employees_json;
Copying data from file:/home/hduser/pradi/employees.json
Copying file: file:/home/hduser/pradi/employees.json
Loading data to table default.employees_json
OK
Time taken: 0.293 seconds


 hive>select * from employees_json;
  OK
  NULL    NULL    NULL
  NULL    NULL    NULL
  NULL    NULL    NULL
  NULL    NULL    NULL
  NULL    NULL    NULL
  NULL    NULL    NULL
Time taken: 0.194 seconds

4 Answers:

Answer 0 (score: 1)

Without the logs (see Getting Started) it is a bit hard to say what is going on. Just a quick idea - could you try using WITH SERDEPROPERTIES, like this:

CREATE EXTERNAL TABLE my_table (field1 string, field2 int, 
                                field3 string, field4 double)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.JsonSerde'
WITH SERDEPROPERTIES (
  "field1"="$.field1",
  "field2"="$.field2",
  "field3"="$.field3",
  "field4"="$.field4" 
);

There is also a fork from ThinkBigAnalytics you might want to try.

UPDATE: It turns out the input in Test.json is not valid JSON, so the records get collapsed.

For more details, see the answer at https://stackoverflow.com/a/11707993/396567.
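For reference, these SerDes deserialize one JSON object per line of the underlying text file, so a cleaned-up Test.json would look something like the following (one complete object per line, no blank lines, no wrapping brackets or trailing commas):

    {"field1":"data1","field2":100,"field3":"more data1","field4":123.001}
    {"field1":"data2","field2":200,"field3":"more data2","field4":123.002}
    {"field1":"data3","field2":300,"field3":"more data3","field4":123.003}
    {"field1":"data4","field2":400,"field3":"more data4","field4":123.004}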

Answer 1 (score: 0)

  1. First, you have to validate your JSON file at http://jsonlint.com/. After that, format your file as one record per line and remove the [ ]. The comma at the end of the line is mandatory.

    [{" field1":" data1"," field2":100," field3":"更多数据1&# 34;," field4中":123.001} {" field1":" data2"," field2":200," field3":"更多数据2",& #34; field4中":123.002} {" field1":" data3"," field2":300," field3":"更多数据3",& #34; field4中":123.003} {" field1":" data4"," field2":400," field3":"更多数据4",& #34; field4中":123.004}]

  2. In my test I added hive-json-serde-0.2.jar from the Hadoop cluster; I think hive-json-serde-0.1.jar should be fine as well.

    ADD JAR hive-json-serde-0.2.jar;

  3. Create the table:

    CREATE TABLE my_table (field1 string, field2 int, field3 string, field4 double) ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.JsonSerde';

  4. Load your JSON data file; here I load it from the Hadoop cluster rather than from the local filesystem (a quick check is sketched after these steps).

    LOAD DATA INPATH 'Test2.json' INTO TABLE my_table;

  5. My test
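As a quick sanity check after the load (a sketch against the my_table definition above, assuming the same four-record data file), you can confirm that all records were parsed rather than just the first one:

    SELECT COUNT(*) FROM my_table;          -- expect 4 if each record sits on its own line
    SELECT field1, field4 FROM my_table;    -- every row should show real values, not NULLs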

Answer 2 (score: 0)

For JSON parsing, based on the cwiki/Confluence documentation we need to follow a few steps (a combined sketch follows the list):

  1. Download hive-hcatalog-core.jar.

  2. hive> add jar /path/hive-hcatalog-core.jar;

  3. create table tablename (colname1 datatype, .....) row format serde 'org.apache.hive.hcatalog.data.JsonSerDe' stored as ORCFILE;

  4. The column names in the created table and the column names in test.json must be the same, otherwise it will show NULL values. Hope it helps.
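Here is a minimal sketch of those steps applied to the employees.json case from the question (the jar path and table name are placeholders, and the table is left as the default text storage since the source is a plain JSON file):

    hive> add jar /path/hive-hcatalog-core.jar;
    hive> create table employees_hcat (firstName string, lastName string, employeeNumber int)
        > row format serde 'org.apache.hive.hcatalog.data.JsonSerDe';
    hive> load data local inpath '/home/hduser/pradi/employees.json' into table employees_hcat;
    hive> select * from employees_hcat;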

Answer 3 (score: 0)

I solved a similar problem -

  1. I took the jar from [http://www.congiu.net/hive-json-serde/1.3.8/hdp23/json-serde-1.3.8-jar-with-dependencies.jar]

  2. Ran the command in the Hive CLI - add jar /path/to/jar

  3. Created the table using -
create table messages (
    id int,
    creation_date string,
    text string,
    loggedInUser STRUCT<id:INT, name: STRING>
)
row format serde "org.openx.data.jsonserde.JsonSerDe";
  4. This is my JSON data -
{"id": 1,"creation_date": "2020-03-01","text": "I am on cotroller","loggedInUser":{"id":1,"name":"API"}}
{"id": 2,"creation_date": "2020-04-01","text": "I am on service","loggedInUser":{"id":1,"name":"API"}}
  5. Loaded the data into the table using -
LOAD DATA LOCAL INPATH '${env:HOME}/path-to-json'
OVERWRITE INTO TABLE messages;
  6. select * from messages;
1   2020-03-01    I am on cotroller   {"id":1,"name":"API"}
2   2020-04-01    I am on service     {"id":1,"name":"API"}
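
As a follow-up, fields of the nested loggedInUser struct can be pulled out with dot notation; given the two sample records above, a sketch like this should return the user name API for both rows:

    select id, creation_date, loggedInUser.name from messages;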