Logstash in Docker - combining 2 events into 1 event

Time: 2017-10-20 14:14:02

Tags: elasticsearch logstash

I am running the Elastic Stack in Docker via the official images; however, when I try to use the Logstash aggregate filter plugin to combine events that share the same RequestID, I currently get the following error message:


Cannot create pipeline {:reason=>"Couldn't find any filter plugin named 'aggregate'. Are you sure this is correct? Trying to load the aggregate filter plugin resulted in this error: Problems loading the requested plugin named aggregate of type filter. Error: NameError NameError"}

That said, I am also not 100% sure how to use the Logstash aggregate plugin to merge the following sample events into a single event:

{
    "@t": "2017-10-16T20:21:35.0531946Z",
    "@m": "HTTP GET Request: \"https://myapi.com/?format=json&trackid=385728443\"",
    "@i": "29b30dc6",
    "Url": "https://myapi.com/?format=json&trackid=385728443",
    "SourceContext": "OpenAPIClient.Client",
    "ActionId": "fd683cc6-9e59-427f-a9f4-7855663f3568",
    "ActionName": "Web.Controllers.API.TrackController.TrackRadioLocationGetAsync (Web)",
    "RequestId": "0HL8KO13F8US6:0000000E",
    "RequestPath": "/api/track/radiourl/385728443"
}
{
    "@t": "2017-10-16T20:21:35.0882617Z",
    "@m": "HTTP GET Response: LocationAPIResponse { Location: \"http://sample.com/file/385728443/\", Error: null, Success: True }",
    "@i": "84f6b72b",
    "Response":
    {
        "Location": "http://sample.com/file/385728443/",
        "Error": null,
        "Success": true,
        "$type": "LocationAPIResponse"
    },
    "SourceContext": "OpenAPIClient.Client",
    "ActionId": "fd683cc6-9e59-427f-a9f4-7855663f3568",
    "ActionName": "Web.Controllers.API.TrackController.TrackRadioLocationGetAsync (Web)",
    "RequestId": "0HL8KO13F8US6:0000000E",
    "RequestPath": "/api/track/radiourl/385728443"
}

Could someone guide me on how to combine these events correctly, and, if aggregate is the right plugin, why this built-in plugin does not seem to be part of the Logstash Docker image?
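If the plugin really is missing from this particular image, one common workaround is to build a small custom image on top of the official one and install the filter there; a minimal sketch, assuming the logstash-plugin tool that ships with the image:

 # Dockerfile (sketch): add the aggregate filter on top of the official 5.6.3 image
 FROM docker.elastic.co/logstash/logstash:5.6.3
 RUN bin/logstash-plugin install logstash-filter-aggregate

The docker-compose.yml below would then point the logstash service at this custom image (for example via build: instead of image:).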

docker-compose.yml contents:

 version: '3'
 services:
   elasticsearch:
     image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
     container_name: elasticsearch
     environment:
       - discovery.type=single-node
       - xpack.security.enabled=false
     ports:
       - 9200:9200
     restart: always
   logstash:
     image: docker.elastic.co/logstash/logstash:5.6.3
     container_name: logstash
     environment:
       - xpack.monitoring.elasticsearch.url=http://elasticsearch:9200
     depends_on:
       - elasticsearch
     ports:
       - 10000:10000
     restart: always
     volumes:
       - ./logstash/pipeline/:/usr/share/logstash/pipeline/
   kibana:
     image: docker.elastic.co/kibana/kibana:5.6.3
     container_name: kibana
     environment:
       - xpack.monitoring.elasticsearch.url=http://elasticsearch:9200
     depends_on:
       - elasticsearch
     ports:
       - 5601:5601
     restart: always

logstash/pipeline/empstore.conf contents:

 input {
     http {
         id => "empstore_http"
         port => 10000
         codec => "json"
     }
 }

 output {
     elasticsearch {
         hosts => [ "elasticsearch:9200" ]
         id => "empstore_elasticsearch"
         index => "empstore-openapi"
     }
 }

 filter {
     mutate {
         rename => { "RequestId" => "RequestID" }
     }

     aggregate {
         task_id => "%{RequestID}"
         code => ""
     }
 }
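One more detail that matters for this setup: the aggregate filter documentation requires Logstash to run with a single pipeline worker, so that both events sharing a RequestID are processed in order by the same filter instance. A sketch of that setting, assuming a logstash.yml is mounted into the container alongside the pipeline directory:

 # logstash/config/logstash.yml (mounted to /usr/share/logstash/config/logstash.yml)
 # aggregate relies on events with the same task_id being processed in order,
 # so restrict the pipeline to a single worker thread.
 pipeline.workers: 1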

1 Answer:

Answer 0: (score: 0)

The code option in the aggregate filter is a required setting.

Code examples:

  • Request_END:

    code => "map['sql_duration'] += event.get('duration')"

  • Request_START:

    code => "map['sql_duration'] = 0"

  • Request:

    code => "map['sql_duration'] += event.get('duration')"
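Applied to the two sample events in the question, a minimal sketch of a filter block that merges the request/response pair by RequestId might look like the following. It assumes the request event always arrives before the matching response, that Logstash runs with a single pipeline worker, and that only the enriched response event should be indexed; the field names come from the sample events, everything else is illustrative:

 filter {
     # First event of a pair: the outgoing request (has a Url, no Response yet).
     if [Url] and ![Response] {
         aggregate {
             task_id => "%{RequestId}"
             code => "map['Url'] = event.get('Url')"
             map_action => "create"
         }
         # Drop the request event so only the merged response event is indexed.
         drop { }
     }

     # Second event of a pair: the response; enrich it with the stored request data.
     if [Response] {
         aggregate {
             task_id => "%{RequestId}"
             code => "event.set('Url', map['Url'])"
             map_action => "update"
             end_of_task => true
             timeout => 120
         }
     }
 }

Whether to drop the request event or instead emit a separate combined event (for example via the plugin's push_map_as_event_on_timeout option) depends on what should end up in the empstore-openapi index.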