Grok pattern for a custom log line

Asked: 2016-09-05 12:04:50

Tags: logstash logstash-grok

I'm new to grok patterns, and I'm trying to write one for the custom log line below. I want to extract the field values given in the log line, such as ServiceName, SystemDate, SequenceName, and so on, as well as the TID ([0]) and the [timestamp]. Any help would be much appreciated.

Log line:

TID: [0] [ESB] [2016-08-16 10:35:10,828] [jms-Worker-2]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  ServiceName = CustomerService_v1,SystemDate = 8/16/16 10:35 AM,ServerIP = 10.200.42.158,ServerHost = slllasp102.local,SequenceName = SendCustomerToTopic,Message = Going to Send Message to Customer Topic,MessageCode = null,ErrorMessage = null,ErrorDetail = null,ErrorException = null {org.apache.synapse.mediators.builtin.LogMediator}

My pattern:

\[%{TIMESTAMP_ISO8601:timestamp}\]\s+%{WORD:loglevel}\s+-\s+%{GREEDYDATA:ServiceName}

I haven't been able to write a correct pattern that retrieves the fields one by one.

1 Answer:

Answer 0 (score: 0)

I finished your grok pattern; it should look like this:

TID: \[%{INT:TID}\] \[ESB\] \[%{TIMESTAMP_ISO8601:timestamp}\] \[jms-Worker-2\]\s+%{WORD:loglevel} \{%{GREEDYDATA}\} -\s+%{GREEDYDATA:fields} \{%{GREEDYDATA}\}

Then use a kv filter to extract the fields. That is easier than doing it with the grok filter alone. The configuration should look like this:

kv {
  source => "fields" # the field created in the grok filter that holds the key/value section (ServiceName = CustomerService_v1,SystemDate = 8/16/16 10:35 AM...)
  value_split => "="
  field_split => ","
  trimkey => " "
  trim => " "
}
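Putting the two filters together, the full pipeline section might look like the following. This is a sketch, not a tested configuration; it assumes the event text arrives in the default `message` field and that the grok filter stores the key/value section in a field named `fields`. Note that on Logstash 5.x and later the kv options were renamed to `trim_key` and `trim_value`; on the 2.x series used at the time of this question they were `trimkey` and `trim`.

```
filter {
  grok {
    # Capture TID, timestamp, and log level, and put the whole
    # "ServiceName = ...,ErrorException = null" section into "fields".
    match => {
      "message" => "TID: \[%{INT:TID}\] \[ESB\] \[%{TIMESTAMP_ISO8601:timestamp}\] \[jms-Worker-2\]\s+%{WORD:loglevel} \{%{GREEDYDATA}\} -\s+%{GREEDYDATA:fields} \{%{GREEDYDATA}\}"
    }
  }
  kv {
    # Split "fields" into individual event fields.
    source      => "fields"
    value_split => "="
    field_split => ","
    trim_key    => " "   # "trimkey" on Logstash 2.x
    trim_value  => " "   # "trim" on Logstash 2.x
  }
}
```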

If you don't want to use the kv filter, you'll have to replace %{GREEDYDATA:fields} with \s+ServiceName = %{GREEDYDATA:ServiceName},SystemDate = %{GREEDYDATA:SystemDate},...
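Spelled out in full for the sample log line, that replacement might look like the sketch below (untested). DATA is used instead of GREEDYDATA for the intermediate fields so that each capture stops at the next comma rather than swallowing the rest of the line; the field names simply mirror the keys visible in the log:

```
ServiceName = %{DATA:ServiceName},SystemDate = %{DATA:SystemDate},ServerIP = %{IP:ServerIP},ServerHost = %{DATA:ServerHost},SequenceName = %{DATA:SequenceName},Message = %{DATA:Message},MessageCode = %{DATA:MessageCode},ErrorMessage = %{DATA:ErrorMessage},ErrorDetail = %{DATA:ErrorDetail},ErrorException = %{DATA:ErrorException}
```

The downside of this approach is that the pattern breaks as soon as a field is missing or a new one is added, which is why the kv filter is usually the more robust choice here.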