Error while parsing date using grok filter in logstash

Time: 2017-04-03 02:25:51

Tags: logstash logstash-grok

I need to parse the date and timestamp in my log so that they appear as the fields "2010-08-18", "00:01:55", and "text". I am able to parse the timestamp but not the date.

Input log:

"2010-08-18","00:01:55","text"

My filter:

grok {
  match => { "message" => '"(%{DATE})","(%{TIME})","(%{GREEDYDATA:message3})"' }
}

A grokparsefailure is thrown here. I'm also not sure how to update the @timestamp field.

Thanks for your help.

1 answer:

Answer 0: (score: 0)

The %{DATE} pattern is not what you want. It looks for something in M/D/Y, M-D-Y, D-M-Y, or D/M/Y format, so an ISO-style date like 2010-08-18 will not match.
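If you do want to stay with grok, a minimal sketch is to build the date from the standard %{YEAR}, %{MONTHNUM}, and %{MONTHDAY} patterns instead (the field names date, time, and message3 here are just illustrative):

grok {
  # %{YEAR}-%{MONTHNUM}-%{MONTHDAY} matches ISO-style dates such as
  # 2010-08-18, which the US/EU-oriented %{DATE} pattern does not.
  match => { "message" => '"(?<date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY})","(?<time>%{TIME})","(?<message3>%{GREEDYDATA})"' }
}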

For a file like this, though, you may want to consider using the csv filter instead:

filter {
  csv {
    columns => ["date","time","message3"]
    add_field => {
       "date_time" => "%{date} %{time}"
    }
  }
  date {
     match => [ "date_time", "yyyy-MM-dd HH:mm:ss" ]
     remove_field => ["date", "time", "date_time" ]
  }
}

This will also handle the case where message3 contains embedded quotes.