由于"编码问题" lambda函数无法将firehose日志索引到AWS托管ES中。
当我从firehose logEvent
base64编码单个record
并将收集的记录发送到AWS托管ES时,我没有收到任何错误。
有关详细信息,请参阅下一节。
A base64-encoded, compressed payload is being sent to ES because the resulting JSON transformation is too large for ES to index - see this ES link.
I get the following error from AWS managed ES:
{
    "deliveryStreamARN": "arn:aws:firehose:us-west-2:*:deliverystream/*",
    "destination": "arn:aws:es:us-west-2:*:domain/*",
    "deliveryStreamVersionId": 1,
    "message": "The data could not be decoded as UTF-8",
    "errorCode": "InvalidEncodingException",
    "processor": "arn:aws:lambda:us-west-2:*:function:*"
}
If the output records are not compressed, the body size is too long (for as little as 14MB). With no compression and a simple base64-encoded payload, I get the following error in the Lambda logs instead:
{
    "type": "mapper_parsing_exception",
    "reason": "failed to parse",
    "caused_by": {
        "type": "not_x_content_exception",
        "reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
    }
}
I have CloudWatch logs that are buffered by size/interval and fed into a Kinesis Firehose. The Firehose delivers the logs to a Lambda function, which transforms them into JSON records and then sends them to an AWS managed Elasticsearch cluster.
The Lambda function receives the following JSON structure (a minimal handler sketch follows this example):
{
    "invocationId": "cf1306b5-2d3c-4886-b7be-b5bcf0a66ef3",
    "deliveryStreamArn": "arn:aws:firehose:...",
    "region": "us-west-2",
    "records": [{
        "recordId": "49577998431243709525183749876652374166077260049460232194000000",
        "approximateArrivalTimestamp": 1508197563377,
        "data": "some_compressed_data_in_base_64_encoding"
    }]
}
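For reference, a minimal Python sketch of a Firehose transformation handler that consumes an event of this shape might look like the following; the transform step and function names are illustrative placeholders, not the actual code from this setup:

import base64

def lambda_handler(event, context):
    # A Firehose transformation Lambda must return one output record per
    # input record, keyed by the same recordId.
    output = []
    for record in event['records']:
        decoded = base64.b64decode(record['data'])  # still compressed at this point
        transformed = transform(decoded)            # hypothetical transformation step
        output.append({
            'recordId': record['recordId'],
            'result': 'Ok',
            'data': base64.b64encode(transformed).decode('utf-8'),
        })
    return {'records': output}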
The Lambda function then extracts .records[].data, base64-decodes it, and decompresses the data, which yields the following JSON (a decoding sketch follows this example):
{
    "messageType": "DATA_MESSAGE",
    "owner": "aws_account_number",
    "logGroup": "some_cloudwatch_log_group_name",
    "logStream": "i-0221b6ec01af47bfb",
    "subscriptionFilters": [
        "cloudwatch_log_subscription_filter_name"
    ],
    "logEvents": [
        {
            "id": "33633929427703365813575134502195362621356131219229245440",
            "timestamp": 1508197557000,
            "message": "Oct 16 23:45:57 some_log_entry_1"
        },
        {
            "id": "33633929427703365813575134502195362621356131219229245441",
            "timestamp": 1508197557000,
            "message": "Oct 16 23:45:57 some_log_entry_2"
        },
        {
            "id": "33633929427703365813575134502195362621356131219229245442",
            "timestamp": 1508197557000,
            "message": "Oct 16 23:45:57 some_log_entry_3"
        }
    ]
}
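A minimal sketch of the decode/decompress step, assuming the CloudWatch Logs subscription data is gzip-compressed (the usual case); the function name is illustrative:

import base64
import gzip
import json

def decode_record(record):
    # base64-decode the Firehose record, gunzip it, and parse the
    # DATA_MESSAGE structure shown above.
    raw = base64.b64decode(record['data'])
    payload = gzip.decompress(raw)
    return json.loads(payload)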
Individual items from .logEvents[] are transformed into a JSON structure in which the keys are the columns desired when searching the logs in Kibana, as shown below (a transformation sketch follows this example):
{
    'journalctl_host': 'ip-172-11-11-111',
    'process': 'haproxy',
    'pid': 15507,
    'client_ip': '172.11.11.111',
    'client_port': 3924,
    'frontend_name': 'http-web',
    'backend_name': 'server',
    'server_name': 'server-3',
    'time_duration': 10,
    'status_code': 200,
    'bytes_read': 79,
    '@timestamp': '1900-10-16T23:46:01.0Z',
    'tags': ['haproxy'],
    'message': 'HEAD / HTTP/1.1'
}
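A minimal sketch of the per-logEvent transformation, with the haproxy message parsing omitted; the helper name and the handling of the host field are illustrative, mirroring the fields in the example above:

from datetime import datetime, timezone

def transform_log_event(log_event, log_stream):
    # Convert the millisecond timestamp and attach the fields Kibana should see.
    # Parsing the haproxy fields out of log_event['message'] is omitted here.
    ts = datetime.fromtimestamp(log_event['timestamp'] / 1000.0, tz=timezone.utc)
    return {
        '@timestamp': ts.strftime('%Y-%m-%dT%H:%M:%S.%fZ'),
        'journalctl_host': log_stream,
        'tags': ['haproxy'],
        'message': log_event['message'],
    }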
The transformed JSON records are collected into an array, which is zlib-compressed and base64-encoded into a string that is then wrapped in a new JSON payload as the final Lambda result (see the sketch after this payload):
{
    "records": [
        {
            "recordId": "49577998431243709525183749876652374166077260049460232194000000",
            "result": "Ok",
            "data": "base64_encoded_zlib_compressed_array_of_transformed_logs"
        }
    ]
}
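A sketch of how the final output record described above could be assembled; this simply reproduces the behaviour the question describes (zlib-compressing and base64-encoding the array), it is not a recommended fix:

import base64
import json
import zlib

def build_output_record(record_id, transformed_docs):
    # Serialize the array of transformed documents, zlib-compress it,
    # then base64-encode the result for the Firehose output record.
    body = json.dumps(transformed_docs).encode('utf-8')
    compressed = zlib.compress(body)
    return {
        'recordId': record_id,
        'result': 'Ok',
        'data': base64.b64encode(compressed).decode('utf-8'),
    }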
13 log entries (~4 KB) can turn into roughly 635 KB after transformation.
I have also reduced the awslogs thresholds, hoping to shrink the size of the logs sent to the Lambda function (a sample config stanza follows these settings):
buffer_duration = 10
batch_count = 10
batch_size = 500
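For context, these thresholds live in the per-stream stanza of the awslogs agent configuration; a hypothetical stanza might look like this (file path and names are placeholders):

[/var/log/syslog]
file = /var/log/syslog
log_group_name = some_cloudwatch_log_group_name
log_stream_name = {instance_id}
buffer_duration = 10
batch_count = 10
batch_size = 500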
Unfortunately, when there are bursts, the spikes can exceed 2800 lines and more than 1 MB in size.
When the payload produced by the Lambda function is "too large" (roughly 13 MB of transformed logs), an error is logged in the Lambda CloudWatch logs - "body size is too long". There seems to be no indication of where this error originates, nor whether there is a size limit on the Lambda function's response payload.
Answer 0 (score: 0)
So AWS support has told me that this issue can be addressed by relaxing the following limits:
Instead, I have modified the architecture as follows: