Kinesis Firehose puts JSON objects into S3 without separator commas

Date: 2018-01-12 12:38:37

Tags: json amazon-web-services aws-api-gateway amazon-kinesis amazon-kinesis-firehose

Before sending the data I JSON.stringify it, so it looks like this:

{"data": [{"key1": value1, "key2": value2}, {"key1": value1, "key2": value2}]}

But once it passes through AWS API Gateway and Kinesis Firehose puts it into S3, it looks like this:

    {
     "key1": value1, 
     "key2": value2
    }{
     "key1": value1, 
     "key2": value2
    }

The separator commas between the JSON objects are gone, but I need them to process the data correctly.

The template in API Gateway:

#set($root = $input.path('$'))
{
    "DeliveryStreamName": "some-delivery-stream",
    "Records": [
#foreach($r in $root.data)
#set($data = "{
    ""key1"": ""$r.value1"",
    ""key2"": ""$r.value2""
}")
    {
        "Data": "$util.base64Encode($data)"
    }#if($foreach.hasNext),#end
#end
    ]
}

3 Answers:

Answer 0 (score: 1)

I recently faced the same problem, and the only answers I could find were basically to append a newline ("\n") to the end of each JSON message whenever you post them to the Kinesis stream, or to use some kind of raw JSON decoder method that can handle concatenated JSON objects without delimiters.

I posted a Python code solution that can be found on a related Stack Overflow post: https://stackoverflow.com/a/49417680/1546785
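For the first option, here is a minimal sketch of what appending the newline at publish time might look like with boto3. The stream name is taken from the question's template; the payload is a hypothetical example:

import json
import boto3

firehose = boto3.client('firehose')

record = {"key1": "value1", "key2": "value2"}  # hypothetical payload

# Appending '\n' makes each record land on its own line in the S3 object.
firehose.put_record(
    DeliveryStreamName='some-delivery-stream',
    Record={'Data': json.dumps(record) + '\n'}
)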

Answer 1 (score: 0)

Once AWS Firehose has dumped the JSON objects to S3, it is entirely possible to read the individual JSON objects from the file.

With Python, you can use the raw_decode method from the json package's JSONDecoder:

from json import JSONDecoder, JSONDecodeError
import re
import json
import boto3

NOT_WHITESPACE = re.compile(r'[^\s]')

def decode_stacked(document, pos=0, decoder=JSONDecoder()):
    while True:
        match = NOT_WHITESPACE.search(document, pos)
        if not match:
            return
        pos = match.start()

        try:
            obj, pos = decoder.raw_decode(document, pos)
        except JSONDecodeError:
            # do something sensible if there's some error
            raise
        yield obj

s3 = boto3.resource('s3')

obj = s3.Object("my-bucket", "my-firehose-json-key.json")
# The body is bytes; decode it so the regex and raw_decode can operate on a str.
file_content = obj.get()['Body'].read().decode('utf-8')
for record in decode_stacked(file_content):
    print(json.dumps(record))
    #  {"key1": value1, "key2": value2}
    #  {"key1": value1, "key2": value2}

Source: https://stackoverflow.com/a/50384432/1771155

With Glue / PySpark, you can use:

import json

# textFile splits the object on newlines, so this assumes one JSON record per line
rdd = sc.textFile("s3a://my-bucket/my-firehose-file-containing-json-objects")
df = rdd.map(lambda x: json.loads(x)).toDF()
df.show()

Source: https://stackoverflow.com/a/62984450/1771155

Answer 2 (score: 0)

One approach you could consider is to configure data processing for your Kinesis Firehose delivery stream by adding a Lambda function as its data processor, which is executed before the data is finally delivered to the S3 bucket:

DeliveryStream:
  ...
  Type: AWS::KinesisFirehose::DeliveryStream
  Properties:
    DeliveryStreamType: DirectPut
    ExtendedS3DestinationConfiguration:
      ...
      BucketARN: !GetAtt MyDeliveryBucket.Arn
      ProcessingConfiguration:
        Enabled: true
        Processors:
          - Parameters:
              - ParameterName: LambdaArn
                ParameterValue: !GetAtt MyTransformDataLambdaFunction.Arn
            Type: Lambda
    ...

Then in the Lambda function, make sure '\n' is appended to the record's JSON string; see below for the Lambda function myTransformData.ts in Node.js:

import {
  FirehoseTransformationEvent,
  FirehoseTransformationEventRecord,
  FirehoseTransformationHandler,
  FirehoseTransformationResult,
  FirehoseTransformationResultRecord,
} from 'aws-lambda';

const createDroppedRecord = (
  recordId: string
): FirehoseTransformationResultRecord => {
  return {
    recordId,
    result: 'Dropped',
    data: Buffer.from('').toString('base64'),
  };
};

const processData = (
  payloadStr: string,
  record: FirehoseTransformationEventRecord
) => {
  let jsonRecord;
  // ...
  // Process the original payload,
  // And create the record in JSON
  return jsonRecord;
};

const transformRecord = (
  record: FirehoseTransformationEventRecord
): FirehoseTransformationResultRecord => {
  try {
    const payloadStr = Buffer.from(record.data, 'base64').toString();
    const jsonRecord = processData(payloadStr, record);
    if (!jsonRecord) {
      console.error('Error creating json record');
      return createDroppedRecord(record.recordId);
    }
    return {
      recordId: record.recordId,
      result: 'Ok',
      // Ensure that '\n' is appended to the record's JSON string.
      data: Buffer.from(JSON.stringify(jsonRecord) + '\n').toString('base64'),
    };
  } catch (error) {
    console.error(`Error processing record ${record.recordId}: `, error);
    return createDroppedRecord(record.recordId);
  }
};

const transformRecords = (
  event: FirehoseTransformationEvent
): FirehoseTransformationResult => {
  let records: FirehoseTransformationResultRecord[] = [];
  for (const record of event.records) {
    const transformed = transformRecord(record);
    records.push(transformed);
  }
  return { records };
};

export const handler: FirehoseTransformationHandler = async (
  event,
  _context
) => {
  const transformed = transformRecords(event);
  return transformed;
};

Once the newline delimiter is in place, AWS services such as Athena will be able to process the JSON record data in the S3 bucket correctly, rather than just seeing the first JSON record only.
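As a quick sanity check, here is a minimal sketch that reads a delivered object back and parses it line by line, the way line-oriented consumers such as Athena will see it; the bucket and key names are hypothetical:

import json
import boto3

s3 = boto3.client('s3')

# Hypothetical bucket/key; point this at an object written by the delivery stream.
body = s3.get_object(Bucket='my-delivery-bucket',
                     Key='some/firehose/output-object')['Body'].read()

# With the '\n' delimiter in place, every non-empty line is a complete JSON record.
for line in body.decode('utf-8').splitlines():
    if line.strip():
        print(json.loads(line))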