How to parse multiline logs in ELK based on timestamps

Date: 2017-07-04 10:35:35

Tags: logging elastic-stack filebeat

My logs look like this:

2017-07-04 10:19:52,896 - [INFO] - from application in ForkJoinPool-3-worker-1
Resolving database...
2017-07-04 10:19:52,897 - [INFO] - from application in ForkJoinPool-3-worker-1
Resolving database...
2017-07-04 10:19:52,897 - [DEBUG] - from application in ForkJoinPool-3-worker-1
Json Body : {"took":2,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":0,"max_score":null,"hits":[]},"aggregations":{"fp":{"doc_count_error_upper_bound":0,"sum_other_doc_count":0,"buckets":[]}}}
2017-07-04 10:19:52,898 - [DEBUG] - from application in application-akka.actor.default-dispatcher-53
Successfully updated the transaction.
2017-07-04 10:19:52,899 - [INFO] - from application in ForkJoinPool-3-worker-1
Resolving database...
2017-07-04 10:19:52,901 - [DEBUG] - from application in application-akka.actor.default-dispatcher-54
Successfully updated the transaction.

I want to group everything between two consecutive timestamps into a single event using GREEDYDATA. I am using Filebeat with the ELK stack.
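For example, the first two physical lines of the sample above should end up as one event whose message contains both the timestamp line and the text that follows it:

2017-07-04 10:19:52,896 - [INFO] - from application in ForkJoinPool-3-worker-1
Resolving database...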

1 Answer:

Answer 0 (score: 0)

I solved this problem with the following Filebeat configuration:

-
  paths:
    - /var/www/aspserver/logs/application.log
  document_type: asp
  input_type: log
  multiline:
    pattern: '^[0-9]'  # a line starting with a digit begins a new event
    negate: true       # apply the rule to lines that do NOT match the pattern
    match: after       # append non-matching lines after the matching line

This appends every line that does not start with a digit to the preceding line that does, so each multiline event begins at a timestamp line.
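For context, this prospector block lives under filebeat.prospectors in filebeat.yml. A minimal sketch of the full file follows; the Logstash output section and its host are assumptions for illustration, not part of the original answer:

filebeat:
  prospectors:
    -
      paths:
        - /var/www/aspserver/logs/application.log
      document_type: asp
      input_type: log
      multiline:
        pattern: '^[0-9]'
        negate: true
        match: after

output:
  logstash:
    hosts: ["localhost:5044"]  # assumed Logstash endpoint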

Logstash filter:

filter {
  if [type] == "asp" {
    grok {
      # JAVASTACKTRACEPART is a custom pattern defined under patterns_dir
      patterns_dir => "/etc/logstash/conf.d/patterns"
      match => { "message" => "%{JAVASTACKTRACEPART}" }
    }
  }
}

This swallows all of the log content into a single event.
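The answer does not show the contents of the pattern file, so JAVASTACKTRACEPART is whatever the author defined under /etc/logstash/conf.d/patterns. A plausible definition matching the log format above might look like the following; the field names and the GREEDYMULTILINE helper are illustrative assumptions (plain GREEDYDATA does not match across the newlines that Filebeat keeps in merged events):

# hypothetical contents of a file in /etc/logstash/conf.d/patterns
GREEDYMULTILINE (.|\n)*
JAVASTACKTRACEPART %{TIMESTAMP_ISO8601:timestamp} - \[%{LOGLEVEL:level}\] - from %{DATA:app} in %{NOTSPACE:thread}\n%{GREEDYMULTILINE:log_message}

With a pattern like this in place, a date filter could then copy the captured timestamp field into @timestamp, though that step is not shown in the original answer.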