I am trying out Logstash, and my application produces logs of the following kind. Here the 5 indicates that 5 more lines will follow, each containing stats collected for a different related thing.
These are basically application stats, with each line describing one resource.
Is there a way to parse this properly with Logstash so it can be used with Elasticsearch?
[20170502 01:57:26.209 EDT (thread-name) package-name.classname#MethodName INFO] Some info line (5 stats):
[fieldA: strvalue1| field2: 0 | field3: 0 | field4: 0 | field5: 0 | field6: 0 | field7: 0]
[fieldA: strvalue2| field2: 0 | field3: 0 | field4: 0 | field5: 0 | field6: 0 | field7: 0]
[fieldA: strvalue3| field2: 0 | field3: 0 | field4: 0 | field5: 0 | field6: 0 | field7: 0]
[fieldA: strvalue4| field2: 0 | field3: 0 | field4: 0 | field5: 0 | field6: 0 | field7: 0]
[fieldA: strvalue5| field2: 0 | field3: 0 | field4: 0 | field5: 0 | field6: 0 | field7: 0]
Edit:
Here is the config I am using. The first set of stats is parsed correctly, but after that the pipeline gets stuck. Note that there are 150 such blocks in the log; if I keep only 2-3 of them, everything works fine. Can you help me figure this out?
# [20170513 06:08:29.734 EDT (StatsCollector-1) deshaw.tools.jms.ActiveMQLoggingPlugin$ActiveMQDestinationStatsCollector#logPerDestinationStats INFO] ActiveMQ Destination Stats (97 destinations):
# [destName: topic://darts.metaDataChangeTopic | enqueueCount: 1 | dequeueCount: 1 | dispatchCount: 1 | expiredCount: 0 | inflightCount: 0 | msgsHeld: 0 | msgsCached: 0 | memoryPercentUsage: 0 | memoryUsage: 0 | memoryLimit: 536870912 | avgEnqueueTimeMs: 0.0 | maxEnqueueTimeMs: 0 | minEnqueueTimeMs: 0 | currentConsumers: 1 | currentProducers: 0 | blockedSendsCount: 0 | blockedSendsTimeMs: 0 | minMsgSize: 2392 | maxMsgSize: 2392 | avgMsgSize: 2392.0 | totalMsgSize: 2392]
input {
  file {
    path => "/u/bansalp/activemq_primary_plugin.stats.log.1"
    ### For testing and continual processing of the same file, remove these before production
    start_position => "beginning"
    sincedb_path => "/dev/null"
    ### Let's read the logfile and recombine multiline details
    codec => multiline {
      pattern => "^\[destName:"
      negate => false
      what => "previous"
    }
  }
}
filter {
  if [message] =~ /^\s*$/ {
    drop {}
  }
  if [message] =~ /^[^\[]/ {
    drop {}
  }
  if [message] =~ /logMemoryInfo|logProcessInfo|logSystemInfo|logThreadBreakdown|logBrokerStats/ {
    drop {}
  }
  if [message] =~ "logPerDestinationStats" {
    grok {
      match => { "message" => "^\[%{YEAR:yr}%{MONTHNUM:mnt}%{MONTHDAY:daynum}\s*%{TIME:time}\s*%{TZ:timezone}\s*(%{DATA:thread_name})\s*%{JAVACLASS:javaclass}#%{WORD:method}\s*%{LOGLEVEL}\]\s*" }
    }
    split {
      field => "message"
    }
    grok {
      match => { "message" => "^\[%{DATA}:\s*%{DATA:destName}\s*\|\s*%{DATA}:\s*%{NUMBER:enqueueCount}\s*\|\s*%{DATA}:\s*%{NUMBER:dequeueCount}\s*\|\s*%{DATA}:\s*%{NUMBER:dispatchCount}\s*\|\s*%{DATA}:\s*%{NUMBER:expiredCount}\s*\|\s*%{DATA}:\s*%{NUMBER:inflightCount}\s*\|\s*%{DATA}:\s*%{NUMBER:msgsHeld}\s*\|\s*%{DATA}:\s*%{NUMBER:msgsCached}\s*\|\s*%{DATA}:\s*%{NUMBER:memoryPercentUsage}\s*\|\s*%{DATA}:\s*%{NUMBER:memoryUsage}\s*\|\s*%{DATA}:\s*%{NUMBER:memoryLimit}\s*\|\s*%{DATA}:\s*%{NUMBER:avgEnqueueTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:maxEnqueueTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:minEnqueueTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:currentConsumers}\s*\|\s*%{DATA}:\s*%{NUMBER:currentProducers}\s*\|\s*%{DATA}:\s*%{NUMBER:blockedSendsCount}\s*\|\s*%{DATA}:\s*%{NUMBER:blockedSendsTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:minMsgSize}\s*\|\s*%{DATA}:\s*%{NUMBER:maxMsgSize}\s*\|\s*%{DATA}:\s*%{NUMBER:avgMsgSize}\s*\|\s*%{DATA}:\s*%{NUMBER:totalMsgSize}\]$" }
    }
    mutate {
      convert => { "message" => "string" }
      add_field => {
        "session_timestamp" => "%{yr}-%{mnt}-%{daynum} %{time} %{timezone}"
        "load_timestamp" => "%{@timestamp}"
      }
      remove_field => ["yr", "mnt", "daynum", "time", "timezone"]
    }
  }
}
output {
  stdout { codec => rubydebug }
}
Answer (score: 1)
There certainly is.
What you need to do is use the multiline codec on your input.
As per the example:
input {
  file {
    path => "/var/log/someapp.log"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^\[%{YEAR}%{MONTHNUM}%{MONTHDAY}\s*%{TIME}"
      negate => true
      what => previous
    }
  }
}
This basically says that any line that does not start with YYYYMMDD HH:mi:ss.SSS will be merged with the previous line.
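For illustration, with the sample above the codec should emit one combined event whose message field holds the header line plus the five stat lines joined by newlines, roughly like this (a sketch of the shape, not verbatim output):

{
       "message" => "[20170502 01:57:26.209 EDT (thread-name) package-name.classname#MethodName INFO] Some info line (5 stats):\n[fieldA: strvalue1| field2: 0 | ...]\n[fieldA: strvalue2| field2: 0 | ...]\n...",
          "tags" => ["multiline"]
}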
From there, you can now apply a grok pattern to the first line (to get the high-level data).
Once you are happy you have all the data you need from the first line, you can then split on \r or \n and grab the individual stats with a single grok pattern (based on the example you provided above).
Hope this helps,
d
Update 2017-05-08 11:54:
A full logstash conf might look like this; you will want to consider changing the grok patterns to better suit your requirements (only you know your data).
Note this has not been tested, I leave that up to you.
input {
  file {
    path => "/var/log/someapp.log"
    ### For testing and continual processing of the same file, remove these before production
    start_position => "beginning"
    sincedb_path => "/dev/null"
    ### Let's read the logfile and recombine multiline details
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^\[%{YEAR}%{MONTHNUM}%{MONTHDAY}\s*%{TIME}"
      negate => true
      what => previous
    }
  }
}
filter {
  ### Let's get some high-level data before we split the line (note: anything you grab before the split gets copied to every resulting event)
  grok {
    match => { "message" => "^\[%{YEAR:yr}%{MONTHNUM:mnt}%{MONTHDAY:daynum}\s*%{TIME:time}\s*%{TZ:timezone}\s*(%{DATA:thread_name})\s*%{JAVACLASS:javaclass}#%{WORD:method}\s*%{LOGLEVEL}\]" }
  }
  ### Split the lines back out into single events now (the terminator may be \r or \n, test which one)
  split {
    field => "message"
    terminator => "\r"
  }
  ### Ok, the lines should now be independent; add another grok here to match the patterns dictated by your example [fieldA: str | field2: 0 ...] etc.
  ### Note: you should look to change the grok pattern to better suit your requirements, I used DATA here to quickly capture your content
  grok {
    break_on_match => false
    match => { "message" => "^\[%{DATA}:\s*%{DATA:fieldA}\|%{DATA}:\s*%{DATA:field2}\|%{DATA}:\s*%{DATA:field3}\|%{DATA}:\s*%{DATA:field4}\|%{DATA}:\s*%{DATA:field5}\|%{DATA}:\s*%{DATA:field6}\|%{DATA}:\s*%{DATA:field7}\]$" }
  }
  mutate {
    convert => { "message" => "string" }
    add_field => {
      "session_timestamp" => "%{yr}-%{mnt}-%{daynum} %{time} %{timezone}"
      "load_timestamp" => "%{@timestamp}"
    }
    remove_field => ["yr", "mnt", "daynum", "time", "timezone"]
  }
}
output {
  stdout { codec => rubydebug }
}
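With your example stat line, each event coming out of the split and grok should then look something like this in rubydebug (a sketch of the shape, values taken from your sample, remaining fields elided):

{
       "message" => "[fieldA: strvalue1| field2: 0 | field3: 0 | field4: 0 | field5: 0 | field6: 0 | field7: 0]",
        "fieldA" => "strvalue1",
        "field2" => "0",
        "field3" => "0",
        ...
}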
Edit 2017-05-15:
Logstash is a sophisticated parser; it expects to stay up as a process and continually monitor the log files (hence why you have to kill it to stop it).
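If you only want a one-shot run while testing, one option (an untested sketch using the stock stdin input) is to pipe the file in on stdin; logstash shuts itself down once stdin hits end-of-file. The multiline codec's auto_flush_interval setting helps make sure the last buffered event gets flushed rather than sitting in the buffer:

input {
  stdin {
    codec => multiline {
      pattern => "^\[destName:"
      negate => false
      what => "previous"
      # flush a pending multiline event after 1 second of silence
      auto_flush_interval => 1
    }
  }
}

Run it as: bin/logstash -f test.conf < /u/bansalp/activemq_primary_plugin.stats.log.1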
break_on_match means you can have multiple match patterns for the same field; if grok does not find a match it tries the next pattern in the list (always order them complex to simple).
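For example, grok accepts an array of patterns per field, and with the default break_on_match => true it stops at the first pattern that matches (a sketch; the field names rest and unparsed_stat are hypothetical):

grok {
  match => { "message" => [
    "^\[destName:\s*%{DATA:destName}\s*\|%{GREEDYDATA:rest}\]$",
    "^\[%{GREEDYDATA:unparsed_stat}\]$"
  ] }
}

The most specific pattern goes first; the catch-all at the end stops events from being tagged with _grokparsefailure.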
In your input, the path is changed to end in .log*. Also, as per your original example, the multiline pattern doesn't have to match the date format; it only needs to bring all the associated stat lines together into one event.
Your split filter should specify the terminator character (otherwise the default is a newline, \n).
input {
  file {
    path => "/u/bansalp/activemq_primary_plugin.stats.log*"
    ### For testing and continual processing of the same file, remove these before production
    start_position => "beginning"
    sincedb_path => "/dev/null"
    ### Let's read the logfile and recombine multiline details
    codec => multiline {
      pattern => "^\[destName:"
      negate => false
      what => "previous"
    }
  }
}
filter {
  if "logPerDestinationStats" in [message] {
    grok {
      match => { "message" => "^\[%{YEAR:yr}%{MONTHNUM:mnt}%{MONTHDAY:daynum}\s*%{TIME:time}\s*%{TZ:timezone}\s*(%{DATA:thread_name})\s*%{JAVACLASS:javaclass}#%{WORD:method}\s*%{LOGLEVEL}\]\s*" }
    }
    split {
      field => "message"
      terminator => "\r"
    }
    grok {
      match => { "message" => "^\[%{DATA}:\s*%{DATA:destName}\s*\|\s*%{DATA}:\s*%{NUMBER:enqueueCount}\s*\|\s*%{DATA}:\s*%{NUMBER:dequeueCount}\s*\|\s*%{DATA}:\s*%{NUMBER:dispatchCount}\s*\|\s*%{DATA}:\s*%{NUMBER:expiredCount}\s*\|\s*%{DATA}:\s*%{NUMBER:inflightCount}\s*\|\s*%{DATA}:\s*%{NUMBER:msgsHeld}\s*\|\s*%{DATA}:\s*%{NUMBER:msgsCached}\s*\|\s*%{DATA}:\s*%{NUMBER:memoryPercentUsage}\s*\|\s*%{DATA}:\s*%{NUMBER:memoryUsage}\s*\|\s*%{DATA}:\s*%{NUMBER:memoryLimit}\s*\|\s*%{DATA}:\s*%{NUMBER:avgEnqueueTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:maxEnqueueTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:minEnqueueTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:currentConsumers}\s*\|\s*%{DATA}:\s*%{NUMBER:currentProducers}\s*\|\s*%{DATA}:\s*%{NUMBER:blockedSendsCount}\s*\|\s*%{DATA}:\s*%{NUMBER:blockedSendsTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:minMsgSize}\s*\|\s*%{DATA}:\s*%{NUMBER:maxMsgSize}\s*\|\s*%{DATA}:\s*%{NUMBER:avgMsgSize}\s*\|\s*%{DATA}:\s*%{NUMBER:totalMsgSize}\]$" }
    }
    mutate {
      convert => { "message" => "string" }
      add_field => {
        "session_timestamp" => "%{yr}-%{mnt}-%{daynum} %{time} %{timezone}"
        "load_timestamp" => "%{@timestamp}"
      }
      remove_field => ["yr", "mnt", "daynum", "time", "timezone"]
    }
  } else {
    drop {}
  }
}
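Once the events look right in rubydebug, pointing the output at Elasticsearch is a small change; a minimal sketch, assuming a local node on the default port and a hypothetical index name:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # one index per day keeps the stats easy to manage and query
    index => "activemq-stats-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}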
Please excuse the formatting, I am currently updating this from my phone; I'm happy for someone to fix the formatting on my behalf.