I am trying to parse a log file with grok. The configuration I am using lets me parse single-line events, but not multiline ones (with a Java stack trace).
# What I get in Kibana for a single-line event:
{
  "_index": "logstash-2015.02.05",
  "_type": "logs",
  "_id": "mluzA57TnCpH-XBRbeg",
  "_score": null,
  "_source": {
    "message": " - 2014-01-14 11:09:35,962 [main] INFO (api.batch.ThreadPoolWorker) user.country=US",
    "@version": "1",
    "@timestamp": "2015-02-05T09:38:21.310Z",
    "path": "/root/test2.log",
    "time": "2014-01-14 11:09:35,962",
    "main": "main",
    "loglevel": "INFO",
    "class": "api.batch.ThreadPoolWorker",
    "mydata": " user.country=US"
  },
  "sort": [
    1423129101310,
    1423129101310
  ]
}
# What I get for a multiline event with a stack trace:
{
  "_index": "logstash-2015.02.05",
  "_type": "logs",
  "_id": "9G6LsSO-aSpsas_jOw",
  "_score": null,
  "_source": {
    "message": "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)",
    "@version": "1",
    "@timestamp": "2015-02-05T09:38:21.380Z",
    "path": "/root/test2.log",
    "tags": [
      "_grokparsefailure"
    ]
  },
  "sort": [
    1423129101380,
    1423129101380
  ]
}
input {
  file {
    path => "/root/test2.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^ - %{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    match => [ "message", " -%{SPACE}%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel}%{SPACE}%{SPACE}\(%{JAVACLASS:class}\) %{GREEDYDATA:mydata} %{JAVASTACKTRACEPART}" ]
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    host => "194.3.227.23"
  }
  # stdout { codec => rubydebug }
}
Can anyone tell me what is wrong with my configuration file? Thanks. Here is a sample of my log file:

 - 2014-01-14 11:09:36,447 [main] INFO (support.context.ContextFactory) Creating default context
 - 2014-01-14 11:09:38,623 [main] ERROR (support.context.ContextFactory) Error getting connection to database jdbc:oracle:thin:@HAL9000:1521:DEVPRINT, with user cisuser and driver oracle.jdbc.driver.OracleDriver
java.sql.SQLException: ORA-28001: the password has expired
	at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
	at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:131)
> EDIT: here is the latest configuration I am using:
https://gist.github.com/anonymous/9afe80ad604f9a3d3c00#file-output-L1
Answer 0 (score: 5)
First, when testing repeatedly with the file input, make sure to set sincedb_path => "/dev/null" so that the file is always read from the beginning.
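For example, a test-friendly variant of the file input from the question could look like this (a sketch; path and multiline settings are taken from your config):

```
input {
  file {
    path => "/root/test2.log"
    start_position => "beginning"
    # Read the file from the start on every run; by default Logstash
    # records its position in a sincedb file and resumes from there,
    # which makes repeated tests silently skip already-read lines.
    sincedb_path => "/dev/null"
    codec => multiline {
      pattern => "^ - %{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
}
```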
Regarding multiline, something must be wrong with either your input content or your multiline pattern, because none of your events carries the multiline tag that the multiline codec or filter adds when it aggregates lines. Your message field should contain all the lines separated by newline characters \n (\r\n on Windows, as in my case). Here is the expected output for your input configuration:
{
  "@timestamp" => "2015-02-10T11:03:33.298Z",
  "message" => " - 2014-01-14 11:09:35,962 [main] INFO (api.batch.ThreadPoolWorker) user.country=US\r\n\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20\r",
  "@version" => "1",
  "tags" => [
    [0] "multiline"
  ],
  "host" => "localhost",
  "path" => "/root/test.file"
}
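As a rough illustration of what the codec does with negate => true and what => "previous", here is a minimal Python sketch (the regex and helper function are illustrative, not Logstash internals):

```python
import re

# Event-start pattern, mirroring "^ - %{TIMESTAMP_ISO8601} " from the config.
EVENT_START = re.compile(r"^ - \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3} ")

def aggregate(lines):
    """Mimic the multiline codec: a line that does NOT match the pattern
    (negate => true) is appended to the previous event (what => "previous")."""
    events = []
    for line in lines:
        if EVENT_START.match(line) or not events:
            events.append(line)           # a new event starts here
        else:
            events[-1] += "\n" + line     # continuation line, e.g. a stack frame
    return events
```

If the pattern never matches (or the log lines do not really start with " - "), every line becomes its own event and no multiline tag is ever added, which is exactly the symptom above.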
Regarding grok, since you want to match a multiline string, you should use a pattern like this:
filter {
  grok {
    match => { "message" => [
      "(?m)^ -%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel}%{SPACE}\(%{JAVACLASS:class}\) %{DATA:mydata}\n%{GREEDYDATA:stack}",
      "^ -%{SPACE}%{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel}%{SPACE}\(%{JAVACLASS:class}\) %{GREEDYDATA:mydata}"
    ] }
  }
}
The (?m) prefix instructs the regex engine to perform multiline matching. You will then get events like:
{
  "@timestamp" => "2015-02-10T10:47:20.078Z",
  "message" => " - 2014-01-14 11:09:35,962 [main] INFO (api.batch.ThreadPoolWorker) user.country=US\r\n\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20\r",
  "@version" => "1",
  "tags" => [
    [0] "multiline"
  ],
  "host" => "localhost",
  "path" => "/root/test.file",
  "time" => "2014-01-14 11:09:35,962",
  "main" => "main",
  "loglevel" => "INFO",
  "class" => "api.batch.ThreadPoolWorker",
  "mydata" => " user.country=US\r",
  "stack" => "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20\r"
}
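Note that in the Oniguruma regexes used by grok, (?m) makes the dot match newlines, which corresponds to Python's re.DOTALL / (?s), not Python's re.M. A quick sketch with deliberately simplified patterns (not the actual grok expansions) shows why the flag matters:

```python
import re

event = (" - 2014-01-14 11:09:35,962 [main] INFO (api.batch.ThreadPoolWorker)"
         " user.country=US\n"
         "\tat oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:20)")

# Without dot-matches-newline, a greedy '.*' (like GREEDYDATA) stops at
# '\n', so the stack trace lines are never captured.
single = re.match(r"^ - .*\((?P<cls>[\w.]+)\) (?P<mydata>.*)", event)

# With (?s) -- the Python spelling of Oniguruma's (?m) -- '.' also matches
# '\n', so a second capture can swallow the whole stack trace.
multi = re.match(r"(?s)^ - .*?\((?P<cls>[\w.]+)\) (?P<mydata>[^\n]*)\n(?P<stack>.*)",
                 event)
```

Here `single` captures only "user.country=US", while `multi` additionally captures the "\tat oracle..." continuation in its stack group.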
You can build and validate multiline patterns with this online tool: http://grokconstructor.appspot.com/do/match
One final warning: there is a bug in the Logstash file input with the multiline codec that mixes up the content of multiple files if you use a list or wildcards in the path setting. The only workaround is to use the multiline filter instead.
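For reference, a sketch of the multiline filter equivalent of the codec in the question (same pattern, negate, and what semantics, applied in the filter stage instead of at the input):

```
filter {
  multiline {
    # Lines NOT starting with " - <timestamp>" are joined to the previous event.
    pattern => "^ - %{TIMESTAMP_ISO8601} "
    negate => true
    what => "previous"
  }
}
```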
HTH
EDIT: I concentrated on the multiline strings; you need to add a similar pattern for the non-multiline (single-line) strings.