I am trying to figure out a grok pattern for parsing multiline messages such as exception traces. Below is one such log:
2017-03-30 14:57:41 [12345] [qtp1533780180-12] ERROR com.app.XYZ - Exception occurred while processing
java.lang.NullPointerException: null
    at spark.webserver.MatcherFilter.doFilter(MatcherFilter.java:162)
    at spark.webserver.JettyHandler.doHandle(JettyHandler.java:61)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:119)
    at org.eclipse.jetty.server.Server.handle(Server.java:517)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:302)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:242)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:245)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:75)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:213)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:147)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
    at java.lang.Thread.run(Thread.java:745)
Here is my logstash.conf:
input {
  file {
    path => ["/debug.log"]
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => previous
    }
  }
}
filter {
  mutate {
    gsub => ["message", "r", ""]
  }
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} \[%{NOTSPACE:uid}\] \[%{NOTSPACE:thread}\] %{LOGLEVEL:loglevel} %{DATA:class}\-%{GREEDYDATA:message}" ]
    overwrite => [ "message" ]
  }
  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
  }
}
output {
  elasticsearch { hosts => localhost }
  stdout { codec => rubydebug }
}
This works fine for single-line log parsing, but fails with a "_grokparsefailure" tag for multiline exception traces.
Can anyone suggest the correct filter pattern for parsing multiline logs?
Answer 0 (score: 1)
If you are working with multiline logs, use the multiline filter provided by Logstash. In the multiline filter you first need to specify how to recognize the start of a new record; from your log I can see that each new record starts with a timestamp.
Sample usage:
filter {
  multiline {
    type => "/debug.log"
    # a new record begins with an ISO8601 timestamp
    pattern => "^%{TIMESTAMP_ISO8601}"
    # join lines that do NOT start with a timestamp onto the previous record
    negate => true
    what => "previous"
  }
}
You can then use gsub to replace the "\n" and "\r\n" sequences that the multiline filter inserts into your record, and apply grok after that.
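As a rough sketch of that order of operations (the multiline options are an assumption based on the log format shown in the question, and the grok pattern and field names mirror the question's config):

filter {
  multiline {
    # assumes records start with an ISO8601 timestamp, per the question's log
    pattern => "^%{TIMESTAMP_ISO8601} "
    negate => true
    what => "previous"
  }
  # replace the line breaks inserted by the multiline filter with spaces
  mutate {
    gsub => ["message", "\r\n", " ", "message", "\n", " "]
  }
  # the record is now a single line, so the original grok pattern applies
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} \[%{NOTSPACE:uid}\] \[%{NOTSPACE:thread}\] %{LOGLEVEL:loglevel} %{DATA:class}\-%{GREEDYDATA:message}" ]
    overwrite => [ "message" ]
  }
}

Replacing the breaks with spaces (rather than deleting them outright) keeps the stack-trace frames from running together in the flattened message.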
Answer 1 (score: 0)
The logstash config above worked fine after removing

mutate { gsub => ["message", "r", ""] }
So the working logstash config for parsing both single-line and multiline input in the above log format is:
input {
  file {
    path => ["./debug.log"]
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => previous
    }
  }
}
filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} \[%{NOTSPACE:uid}\] \[%{NOTSPACE:thread}\] %{LOGLEVEL:loglevel} %{DATA:class}\-%{GREEDYDATA:message}" ]
    overwrite => [ "message" ]
  }
  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
  }
}
output {
  elasticsearch { hosts => localhost }
  stdout { codec => rubydebug }
}
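One caveat, as a general note on grok behavior rather than something stated in the answers above: %{GREEDYDATA} expands to .*, and by default . does not match newlines, so with the multiline codec the captured "message" field stops at the first line break and the overwrite drops the stack trace from it. If you want the full trace kept in "message", prefixing the pattern with the (?m) flag should work, for example:

grok {
  # (?m) lets . (and therefore GREEDYDATA) match across the newlines
  # that the multiline codec used to join the stack-trace lines
  match => [ "message", "(?m)%{TIMESTAMP_ISO8601:timestamp} \[%{NOTSPACE:uid}\] \[%{NOTSPACE:thread}\] %{LOGLEVEL:loglevel} %{DATA:class}\-%{GREEDYDATA:message}" ]
  overwrite => [ "message" ]
}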