How do I filter AWS Lambda and API Gateway logs in Logstash for the Elasticsearch Service?

Time: 2019-03-23 09:27:21

Tags: logstash elastic-stack logstash-grok

I have AWS Lambda and API Gateway logs in an S3 bucket. I host Logstash and ship the logs from S3 into the Elasticsearch Service for centralized logging. I want to filter the Lambda and API Gateway logs in Logstash so that I can find them easily in Elasticsearch.

Below is an API Gateway log that arrives in the "message" field in Kibana and that I want to filter:

{"messageType":"DATA_MESSAGE",
"owner":"144360258",
"logGroup":"API-Gateway-Execution-Logs_x63d3nk/live",
"logStream":"d645920e395fedad7bbbed0eca3fe2e0","subscriptionFilters":["API-Gateway-Execution-Logs_x63d3nr84klive"],
"logEvents":[{"id":"3463781636667557636544562987631175646966498","timestamp":1553213404230,"message":"(d7b307ed-4c36-11e9-bb5e-b7d673a) Extended Request Id: W6sqaGhwDoEFavA="},
             {"id":"3463781636781291437057069165653007810157004","timestamp":1553213404281,"message":"(d7b307ed-4c36-11e9-bb5e-b7d673a) Verifying Usage Plan for request: d7b307ed-4c36-11e9-bb5e-b7d673a. API Key:  API Stage: x63d3nk/live"},
             {"id":"3463781636781291437057069165653007810157004","timestamp":1553213404282,"message":"(d7b307ed-4c36-11e9-bb5e-b7d673a) API Key  authorized because method 'OPTIONS /v2' does not require API Key. Request will not contribute to throttle or quota limits"},
             {"id":"3463781636781291437057069165653007810157004","timestamp":1553213404282,"message":"(d7b307ed-4c36-11e9-bb5e-b7d673a) Usage Plan check succeeded for API Key  and API Stage x63d3nk/live"},
             {"id":"3463781636781291437057069165653007810157004","timestamp":1553213404282,"message":"(d7b307ed-4c36-11e9-bb5e-b7d673a) Starting execution for request: d7b307ed-4c36-11e9-bb5e-b7d673a"},
             {"id":"3463781636781291437057069165653007810157004","timestamp":1553213404282,"message":"(d7b307ed-4c36-11e9-bb5e-b7d673a) HTTP Method: OPTIONS, Resource Path: /api/v2"},{"id":"346378163678352151157698359732240390","timestamp":1553213404282,"message":"(d7b307ed-4c36-11e9-bb5e-b7d673a) Successfully completed execution"},
             {"id":"3463781636781291437057069165653007810157004","timestamp":1553213404282,"message":"(d7b307ed-4c36-11e9-bb5e-b7d673a) Method completed with status: 200"}]
}

The filter I tried:
filter {
    grok {
        match => { "message" => "%{GREEDYDATA:wd}" }
    }
    json{
        source => "wd"
        target => "js"
    }
    mutate {
        add_field => { "t1" => "%{[js][logEvents][message]}"}
    }
}

2 Answers:

Answer 0 (score: 1)

I parsed it with the json and split filters, and then used mutate to pick out the values: https://www.elastic.co/guide/en/logstash/current/plugins-filters-split.html

filter {
    json {
        source => "message"
    }
    split {
        field => "logEvents"
    }
    mutate {
        add_field => ["time", "%{[logEvents][timestamp]}"]
    }
}
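Expanding on that answer, here is a fuller sketch of the same json/split/mutate approach. The field names logGroup, logEvents, message, and timestamp come from the sample event above; the target field names (log_message, log_group) and the date-filter settings are illustrative assumptions, not part of the original answer:

filter {
    json {
        source => "message"
    }
    # split emits one event per element of the logEvents array,
    # so [logEvents][message] now refers to a single log line.
    split {
        field => "logEvents"
    }
    mutate {
        add_field => {
            "log_message" => "%{[logEvents][message]}"
            "log_group"   => "%{logGroup}"
        }
    }
    # The sample timestamps (e.g. 1553213404230) are epoch milliseconds.
    date {
        match  => ["[logEvents][timestamp]", "UNIX_MS"]
        target => "@timestamp"
    }
}

This is why the original attempt's "%{[js][logEvents][message]}" printed nothing useful: logEvents was still an array at that point, so there was no single message field to reference. Splitting first makes the per-event fields addressable.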

Answer 1 (score: 0)

I suggest using Grok in your Logstash configuration file.

Grok is a great way to parse unstructured log data into something structured and queryable.

Use these links to build your log parser:

Documentation

Example
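As a sketch of that suggestion: the messages in the sample all start with a parenthesized request id, e.g. "(d7b307ed-4c36-11e9-bb5e-b7d673a) HTTP Method: OPTIONS, Resource Path: /api/v2". A grok filter run after the json/split step could pull that apart (the field names request_id and log_detail are assumptions made here for illustration):

filter {
    grok {
        # Matches lines of the form "(<request id>) <rest of message>"
        match => {
            "[logEvents][message]" => "\(%{DATA:request_id}\) %{GREEDYDATA:log_detail}"
        }
    }
}

With request_id extracted as its own field, all log lines belonging to one API Gateway request can be grouped in Kibana instead of searched as free text.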