How to split out the JSON value from a log file using grok / regex

Asked: 2016-12-23 10:20:28

Tags: regex logstash logstash-grok

I have a log file from which I need to extract the JSON content so that I can parse it with the logstash json filter. I wrote a grok pattern, but it doesn't work correctly. Below is my log file.

2016-12-18 12:13:52.313 -08:00 [Information] 636176600323139749 1b2c4c40-3da6-46ff-b93f-0eb07a57f2a3 18 - API: GET https://aaa.com/o/v/S?$filter=uid eq '9'&$expand=org($filter=org eq '0')
{
  "Id": "1b",
  "App": "D",
  "User": "",
  "Machine": "DC",
  "RequestIpAddress": "xx.xxx.xxx",
  "RequestHeaders": {
  "Cache-Control": "no-transform",
  "Connection": "close",
  "Accept": "application/json"
},
  "RequestTimestamp": "2016-12-18T12:13:52.2609587-08:00",
  "ResponseContentType": "application/json",
  "ResponseContentBody": {
  "@od","value":[
    {
      "uid":"","sId":"10,org":[
        {
          "startDate":"2015-02-27T08:00:00Z","Code":"0","emailId":"xx@gg.COM"
        }
      ]
    }
  ]
},
  "ResponseStatusCode": 200,
  "ResponseHeaders": {
  "Content-Type": "application/json;"
},
  "ResponseTimestamp": "2016-12-18T12:13:52.3119655-08:00"
}

My grok pattern:

grok {
  match => [ "message", "%{TIMESTAMP_ISO8601:exclude}%{GREEDYDATA:exclude1}(?<exclude2>[\s])(?<json_value>[\W\w]+)" ]
}

1 answer:

Answer 0 (score: 0)

Assuming this is all one message (i.e. it is not multiline, or the lines have already been merged beforehand), and that there is a space between the URI and the JSON, this grok pattern should work:

%{TIMESTAMP_ISO8601} %{NOTSPACE:timezone} \[%{WORD:severity}] %{WORD:field1} %{UUID:field2} %{NUMBER:field3} - API: %{WORD:verb} (?<field4>[^\{]*) %{GREEDYDATA:json}

Using %{URI} would have been nice, but the string you have is not a valid URI (it contains unescaped spaces).
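
Under the same assumption (the whole event arrives as a single message), a minimal sketch of a complete filter block that feeds the captured text into the json filter mentioned in the question could look like this; the field names field1..field4, json, and the target name parsed are just illustrative choices, not anything prescribed:

filter {
  grok {
    # Same pattern as above; everything from the opening brace onward is captured into "json"
    match => [ "message", "%{TIMESTAMP_ISO8601} %{NOTSPACE:timezone} \[%{WORD:severity}] %{WORD:field1} %{UUID:field2} %{NUMBER:field3} - API: %{WORD:verb} (?<field4>[^\{]*) %{GREEDYDATA:json}" ]
  }
  json {
    # Parse the captured JSON text into a separate field ("parsed" is an illustrative name)
    source => "json"
    target => "parsed"
  }
}

If target is omitted, the json filter merges the parsed keys into the top level of the event instead of nesting them under a single field.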