How do you make a Hive table out of JSON data?

Asked: 2012-07-13 22:37:13

Tags: json hadoop hive amazon-emr emr

I want to create a Hive table out of some JSON data (nested) and run queries on it. Is this even possible?

I've uploaded the JSON file to S3 and launched an EMR instance, but I don't know what to type in the Hive console to get the JSON file to be a Hive table.

Does anyone have some example commands to get me started? I can't find anything useful with Google...

7 Answers:

Answer 0 (score: 31)

It's actually not necessary to use a JSON SerDe. There is a great blog post here (I'm not affiliated with the author in any way):

http://pkghosh.wordpress.com/2012/05/06/hive-plays-well-with-json/

It outlines a strategy for using the built-in function json_tuple to parse the JSON at query time (not at table-definition time):

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-json_tuple

Basically, your table schema simply loads each line as a single "string" column, and you then extract the relevant JSON fields as needed on a per-query basis. For example, this query from that blog post:

SELECT b.blogID, c.email FROM comments a
  LATERAL VIEW json_tuple(a.value, 'blogID', 'contact') b AS blogID, contact
  LATERAL VIEW json_tuple(b.contact, 'email', 'website') c AS email, website
WHERE b.blogID='64FY4D0B28';
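For context, the comments table behind that query simply holds each raw JSON line in a single string column; a minimal sketch (the S3 location is assumed):

```sql
-- One STRING column per line; the JSON itself is parsed at query time with json_tuple.
CREATE EXTERNAL TABLE comments (value STRING)
LOCATION 's3://my.bucket/comments/';
```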

In my humble experience, this has proven more reliable (I ran into various cryptic issues dealing with the JSON SerDes, particularly with nested objects).

Answer 1 (score: 23)

You'll need to use a JSON SerDe in order for Hive to map your JSON to the columns in your table.

A really good example showing you how:

http://aws.amazon.com/articles/2855

Unfortunately, the JSON SerDe supplied doesn't handle nested JSON very well, so you might need to flatten your JSON in order to use it.
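For illustration, "flattening" here means hoisting nested objects up into top-level keys. A hypothetical record like

```json
{"user": {"name": "ann", "city": "Oslo"}}
```

would be rewritten as

```json
{"user_name": "ann", "user_city": "Oslo"}
```

so that each column of the table maps to a simple top-level key.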

Here's an example of the correct syntax from the article:

create external table impressions (
    requestBeginTime string, requestEndTime string, hostname string
  )
  partitioned by (
    dt string
  )
  row format 
    serde 'com.amazon.elasticmapreduce.JsonSerde'
    with serdeproperties ( 
      'paths'='requestBeginTime, requestEndTime, hostname'
    )
  location 's3://my.bucket/' ;
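Since the table is partitioned by dt, partitions typically have to be registered before they show up in queries; a hedged sketch (the dt value and S3 path are hypothetical):

```sql
-- Register one partition, then query it.
ALTER TABLE impressions ADD PARTITION (dt='2012-07-13')
  LOCATION 's3://my.bucket/dt=2012-07-13/';

SELECT hostname, count(*) FROM impressions
WHERE dt='2012-07-13' GROUP BY hostname;
```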

Answer 2 (score: 3)

I just had to solve the same problem, and none of the JSON SerDes I could find seemed good enough. Amazon's might be fine, but I can't find the source for it anywhere (does anyone have a link?).

HCatalog's built-in JsonSerDe is working for me, even though I'm not actually using HCatalog anywhere else.

https://github.com/apache/hcatalog/blob/branch-0.5/core/src/main/java/org/apache/hcatalog/data/JsonSerDe.java

To use HCatalog's JsonSerDe, add the hcatalog-core.jar to Hive's auxpath and create your Hive table:

$ hive --auxpath /path/to/hcatalog-core.jar

hive (default)>
create table my_table(...)
ROW FORMAT SERDE
  'org.apache.hcatalog.data.JsonSerDe'
...
;
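Filled out, a complete table definition along those lines might look like this (the column names are hypothetical; the SerDe maps top-level JSON keys to columns by name):

```sql
CREATE TABLE my_table (
  username STRING,
  comments ARRAY<STRING>   -- nested JSON arrays map onto Hive complex types
)
ROW FORMAT SERDE 'org.apache.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE;
```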

I wrote up a more detailed post about this here:

http://ottomata.org/tech/too-many-hive-json-serdes/

Answer 3 (score: 2)

Hive 0.12 and later ships a JsonSerDe in hcatalog-core that will serialize and deserialize your JSON data. So all you need to do is create an external table, as in the following example:

CREATE EXTERNAL TABLE json_table (
    username string,
    tweet string,
    `timestamp` bigint)
ROW FORMAT SERDE
'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE
LOCATION
 'hdfs://data/some-folder-in-hdfs';

The corresponding JSON data file should look like this:

{"username":"miguno","tweet":"Rock: Nerf paper, scissors is fine.","timestamp": 1366150681 }
{"username":"BlizzardCS","tweet":"Works as intended.  Terran is IMBA.","timestamp": 1366154481 }
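Once the table exists, the JSON fields are queried like ordinary columns, for example (the filter value is hypothetical):

```sql
-- `timestamp` is backquoted because it clashes with a Hive keyword.
SELECT username, tweet
FROM json_table
WHERE `timestamp` > 1366150000;
```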

Answer 4 (score: 1)

Generating the SerDe schema from a .json file

If the .json file is large, writing the schema by hand can be tedious. If so, you can use this handy tool to generate it automatically:

https://github.com/strelec/hive-serde-schema-gen

Answer 5 (score: 0)

JSON-processing capability is now available directly in Hive.


Hive 4.0.0 and later:

CREATE TABLE ... STORED AS JSONFILE

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-StorageFormatsStorageFormatsRowFormat,StorageFormat,andSerDe

Each JSON object must be flattened to fit on a single line (embedded newlines are not supported), and the objects must not be part of an enclosing JSON array:

{"firstName":"John","lastName":"Smith","Age":21}
{"firstName":"Jane","lastName":"Harding","Age":18}
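A minimal table definition matching those two lines of data might look like this (the table name is assumed):

```sql
CREATE TABLE people (firstName STRING, lastName STRING, Age INT)
STORED AS JSONFILE;
```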

Answer 6 (score: 0)

To make a Hive table out of a JSON file, you need to write a CREATE TABLE statement based on HiveQL DDL standards, specifically for your JSON structure.

This can get really complex if you're working with nested JSON files, so I recommend this quick and easy generator: https://hivetablegenerator.com/

Analyzing JSON files with HiveQL requires either org.openx.data.jsonserde.JsonSerDe or org.apache.hive.hcatalog.data.JsonSerDe to work correctly.

org.apache.hive.hcatalog.data.JsonSerDe
This is the default JSON SerDe from Apache. It is commonly used to process JSON data such as events, represented as blocks of JSON-encoded text separated by newlines. The Hive JSON SerDe does not allow duplicate keys in map or struct key names.

org.openx.data.jsonserde.JsonSerDe
The OpenX JSON SerDe is similar to the native Apache one; however, it offers several optional properties such as 'ignore.malformed.json', 'case.insensitive', and more. In my opinion, it usually works better when dealing with nested JSON files.
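For illustration, those optional properties are passed via WITH SERDEPROPERTIES; a fragment of a table definition might read:

```sql
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
  'ignore.malformed.json' = 'true',  -- skip unparseable lines instead of failing
  'case.insensitive' = 'true'        -- match JSON keys to columns case-insensitively
)
```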

Take this sample complex JSON file:

{
  "schemaVersion": "1.0",
  "id": "07c1687a0fd34ebf8a42e8a8627321dc",
  "accountId": "123456677",
  "partition": "aws",
  "region": "us-west-2",
  "severity": {
      "score": "0",
      "description": "Informational"
  },
  "createdAt": "2021-02-27T18:57:07Z",
  "resourcesAffected": {
      "s3Bucket": {
          "arn": "arn:aws:s3:::bucket-sample",
          "name": "bucket-sample",
          "createdAt": "2020-08-09T07:24:55Z",
          "owner": {
              "displayName": "account-name",
              "id": "919a30c2f56c0b220c32e9234jnkj435n6jk4nk"
          },
          "tags": [],
          "defaultServerSideEncryption": {
              "encryptionType": "AES256"
          },
          "publicAccess": {
              "permissionConfiguration": {
                  "bucketLevelPermissions": {
                      "accessControlList": {
                          "allowsPublicReadAccess": false,
                          "allowsPublicWriteAccess": false
                      },
                      "bucketPolicy": {
                          "allowsPublicReadAccess": true,
                          "allowsPublicWriteAccess": false
                      },
                      "blockPublicAccess": {
                          "ignorePublicAcls": false,
                          "restrictPublicBuckets": false,
                          "blockPublicAcls": false,
                          "blockPublicPolicy": false
                      }
                  },
                  "accountLevelPermissions": {
                      "blockPublicAccess": {
                          "ignorePublicAcls": false,
                          "restrictPublicBuckets": false,
                          "blockPublicAcls": false,
                          "blockPublicPolicy": false
                      }
                  }
              },
              "effectivePermission": "PUBLIC"
          }
      },
      "s3Object": {
          "bucketArn": "arn:aws:s3:::bucket-sample",
          "key": "2021/01/17191133/Camping-Checklist-Google-Docs.pdf",
          "path": "bucket-sample/2021/01/17191133/Camping-Checklist-Google-Docs.pdf",
          "extension": "pdf",
          "lastModified": "2021-01-17T22:11:34Z",
          "eTag": "e8d990704042d2e1b7bb504fb5868095",
          "versionId": "isqHLkSsQUMbbULNT2nMDneMG0zqitbD",
          "serverSideEncryption": {
              "encryptionType": "AES256"
          },
          "size": "150532",
          "storageClass": "STANDARD",
          "tags": [],
          "publicAccess": true
      }
  },
  "category": "CLASSIFICATION",
  "classificationDetails": {
      "jobArn": "arn:aws:macie2:us-west-2:123412341341:classification-job/d6cf41ccc7ea8daf3bd53ddcb86a2da5",
      "result": {
          "status": {
              "code": "COMPLETE"
          },
          "sizeClassified": "150532",
          "mimeType": "application/pdf",
          "sensitiveData": []
      },
      "detailedResultsLocation": "s3://bucket-macie/AWSLogs/123412341341/Macie/us-west-2/d6cf41ccc7ea8daf3bd53ddcb86a2da5/123412341341/50de3137-9806-3e43-9b6e-a6158fdb0e3b.jsonl.gz",
      "jobId": "d6cf41ccc7ea8daf3bd53ddcb86a2da5"
  }
}

It requires the following create table statement:

CREATE EXTERNAL TABLE IF NOT EXISTS `macie`.`macie_bucket` (
    `schemaVersion` STRING,
    `id` STRING,
    `accountId` STRING,
    `partition` STRING,
    `region` STRING,
    `severity` STRUCT<
        `score`:STRING,
        `description`:STRING>,
    `createdAt` STRING,
    `resourcesAffected` STRUCT<
        `s3Bucket`:STRUCT<
            `arn`:STRING,
            `name`:STRING,
            `createdAt`:STRING,
            `owner`:STRUCT<
                `displayName`:STRING,
                `id`:STRING>,
            `defaultServerSideEncryption`:STRUCT<
                `encryptionType`:STRING>,
            `publicAccess`:STRUCT<
                `permissionConfiguration`:STRUCT<
                    `bucketLevelPermissions`:STRUCT<
                        `accessControlList`:STRUCT<
                            `allowsPublicReadAccess`:BOOLEAN,
                            `allowsPublicWriteAccess`:BOOLEAN>,
                        `bucketPolicy`:STRUCT<
                            `allowsPublicReadAccess`:BOOLEAN,
                            `allowsPublicWriteAccess`:BOOLEAN>,
                        `blockPublicAccess`:STRUCT<
                            `ignorePublicAcls`:BOOLEAN,
                            `restrictPublicBuckets`:BOOLEAN,
                            `blockPublicAcls`:BOOLEAN,
                            `blockPublicPolicy`:BOOLEAN>>,
                    `accountLevelPermissions`:STRUCT<
                        `blockPublicAccess`:STRUCT<
                            `ignorePublicAcls`:BOOLEAN,
                            `restrictPublicBuckets`:BOOLEAN,
                            `blockPublicAcls`:BOOLEAN,
                            `blockPublicPolicy`:BOOLEAN>>>,
                `effectivePermission`:STRING>>,
        `s3Object`:STRUCT<
            `bucketArn`:STRING,
            `key`:STRING,
            `path`:STRING,
            `extension`:STRING,
            `lastModified`:STRING,
            `eTag`:STRING,
            `versionId`:STRING,
            `serverSideEncryption`:STRUCT<
                `encryptionType`:STRING>,
            `size`:STRING,
            `storageClass`:STRING,
            `publicAccess`:BOOLEAN>>,
    `category` STRING,
    `classificationDetails` STRUCT<
        `jobArn`:STRING,
        `result`:STRUCT<
            `status`:STRUCT<
                `code`:STRING>,
            `sizeClassified`:STRING,
            `mimeType`:STRING>,
        `detailedResultsLocation`:STRING,
        `jobId`:STRING>)
ROW FORMAT SERDE
     'org.openx.data.jsonserde.JsonSerDe'
LOCATION
     's3://awsexamplebucket1-logs/AWSLogs/';
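Nested fields in that schema are then addressed with dot notation, for example:

```sql
SELECT id,
       severity.description,
       resourcesAffected.s3Bucket.name,
       resourcesAffected.s3Object.publicAccess
FROM macie.macie_bucket;
```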

If you need more information from Amazon on how to create tables from nested JSON files with AWS Athena, check out this link: https://aws.amazon.com/blogs/big-data/create-tables-in-amazon-athena-from-nested-json-and-mappings-using-jsonserde/