I am trying to get stack traces logged into Logstash.
The logging stack is ELK (Elasticsearch, Logstash, Kibana).
The application producing the logs is a Java application that uses slf4j as the logging interface and log4j2 as the logging implementation.
log4j2.xml declares this Syslog appender with the RFC5424 format:
<Appenders>
  <Syslog name="RFC5424" format="RFC5424" host="localhost" port="8514"
          protocol="TCP" appName="MyApp" includeMDC="true" mdcId="mdc"
          facility="LOCAL0" enterpriseNumber="18060" newLine="true"
          messageId="Audit" id="App">
    <LoggerFields>
      <KeyValuePair key="thread" value="%t"/>
      <KeyValuePair key="priority" value="%p"/>
      <KeyValuePair key="category" value="%c"/>
      <KeyValuePair key="exception" value="%ex{full}"/>
    </LoggerFields>
  </Syslog>
</Appenders>
I log a Throwable from the Java application like this:
org.slf4j.LoggerFactory.getLogger("exception_test").error("Testing errors", new RuntimeException("Exception message"));
When the exception is logged, Logstash shows that something like this is persisted:
{
  "@timestamp": "2016-11-08T11:08:10.387Z",
  "port": 60397,
  "@version": "1",
  "host": "127.0.0.1",
  "message": "<131>1 2016-11-08T11:08:10.386Z MyComputer.local MyApp - Audit [mdc@18060 category=\"exception_test\" exception=\"java.lang.RuntimeException: Exception message",
  "type": "syslog",
  "tags": [
    "_grokparsefailure"
  ]
}
I confirmed that Kibana shows exactly the same JSON in the _source field of one of the log entries.
So here is the problem: the stack trace is not persisted, and the "Testing errors" message is lost.
The "tags":["_grokparsefailure"] is unfortunate, but unrelated to this question.
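For context, the question does not show the Logstash side. A minimal pipeline that would ingest this stream might look like the sketch below; this is an assumption, not the asker's actual config (port 8514 matches the appender, and type "syslog" matches the events shown), and the grok pattern is only a rough guess at the RFC5424 header, which is presumably why a mismatch would produce the _grokparsefailure tag:

```conf
# Hypothetical Logstash pipeline (not from the question).
input {
  tcp {
    port => 8514
    type => "syslog"
  }
}
filter {
  grok {
    # RFC5424 header: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID ...
    match => { "message" => "<%{NONNEGINT:pri}>%{NONNEGINT:ver} %{TIMESTAMP_ISO8601:ts} %{HOSTNAME:src} %{WORD:app} %{NOTSPACE:procid} %{WORD:msgid} %{GREEDYDATA:rest}" }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```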
I tried adding an <ExceptionPattern/> to see whether it would change anything:
<Syslog name="RFC5424" format="RFC5424" host="localhost" port="8514"
        protocol="TCP" appName="MyApp" includeMDC="true" mdcId="mdc"
        facility="LOCAL0" enterpriseNumber="18060" newLine="true"
        messageId="Audit" id="App">
  <LoggerFields>
    <KeyValuePair key="thread" value="%t"/>
    <KeyValuePair key="priority" value="%p"/>
    <KeyValuePair key="category" value="%c"/>
    <KeyValuePair key="exception" value="%ex{full}"/>
  </LoggerFields>
  <ExceptionPattern>%ex{full}</ExceptionPattern>
</Syslog>
The <ExceptionPattern/> replaced the logged message, and it also (sadly) omitted all the LoggerFields. But it did give me a class name and a line number:
{
  "@timestamp": "2016-11-08T11:54:03.835Z",
  "port": 60397,
  "@version": "1",
  "host": "127.0.0.1",
  "message": "at com.stackoverflow.LogTest.throw(LogTest.java:149)",
  "type": "syslog",
  "tags": [
    "_grokparsefailure"
  ]
}
Again: no stack trace. And again, the "Testing errors" message is lost.
How can I log stack traces into Logstash using log4j2? I do not necessarily have to use the syslog appender.
Basically, the constraints are:
Answer (score 1):
Log4j 2.5's SyslogAppender can only send stack traces over UDP.
<Syslog name="RFC5424" format="RFC5424" host="localhost" port="8514"
        protocol="UDP" appName="MyApp" includeMDC="true" mdcId="mdc"
        facility="LOCAL0" enterpriseNumber="18060" newLine="true"
        messageId="LogTest" id="App">
  <LoggerFields>
    <KeyValuePair key="thread" value="%t"/>
    <KeyValuePair key="priority" value="%p"/>
    <KeyValuePair key="category" value="%c"/>
    <KeyValuePair key="exception" value="%ex{full}"/>
  </LoggerFields>
  <ExceptionPattern>%ex{full}</ExceptionPattern>
</Syslog>
With UDP, both ExceptionPattern and LoggerFields.KeyValuePair["exception"] start working as solutions for multi-line stack traces.
Here is what Logstash prints when I send an exception through syslog over UDP:
{
    "@timestamp" => 2016-11-14T13:23:38.304Z,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "<131>1 2016-11-14T13:23:38.302Z BirchBox.local MyApp - LogTest [mdc@18060 category=\"com.stackoverflow.Deeply\" exception=\"java.lang.RuntimeException: Exception message\n\tat com.stackoverflow.Deeply.complain(Deeply.java:10)\n\tat com.stackoverflow.Nested.complain(Nested.java:8)\n\tat com.stackoverflow.Main.main(Main.java:20)\n\" priority=\"ERROR\" thread=\"main\"] Example error\njava.lang.RuntimeException: Exception message\n\tat com.stackoverflow.Deeply.complain(Deeply.java:10)\n\tat com.stackoverflow.Nested.complain(Nested.java:8)\n\tat com.stackoverflow.Main.main(Main.java:20)",
          "type" => "syslog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
Inside [mdc@18060 exception=\"…\"] we get the LoggerFields.KeyValuePair["exception"] stack trace.
On top of that, thanks to ExceptionPattern, the stack trace is also appended to the logged message itself.
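The multi-line payload shown above is what %ex{full} renders: the same text that Throwable.printStackTrace produces, with one "\tat …" frame per line separated by '\n'. As a hypothetical illustration (the com.stackoverflow.* sources are not part of the question; the class and method names here are stand-ins), this is where those embedded newlines come from:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Stand-in for the nested calls seen in the stack trace above.
public class StackTraceDemo {
    static void complainDeeply() {
        throw new RuntimeException("Exception message");
    }

    static void complainNested() {
        complainDeeply();
    }

    // Renders a Throwable the way printStackTrace does: the exception line
    // followed by newline-separated "\tat ..." frames. This newline-laden
    // string is what ends up inside a single syslog message.
    static String render(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        return sw.toString();
    }

    public static void main(String[] args) {
        try {
            complainNested();
        } catch (RuntimeException e) {
            System.out.print(render(e));
        }
    }
}
```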
For reference, here is what Logstash prints when I send the exception through syslog over TCP (i.e. the same SyslogAppender as above, but with protocol="TCP" instead):
{
    "@timestamp" => 2016-11-14T19:56:30.293Z,
          "port" => 63179,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "<131>1 2016-11-14T19:56:30.277Z BirchBox.local MyApp - Audit [mdc@18060 category=\"com.stackoverflow.Deeply\" exception=\"java.lang.RuntimeException: Exception message",
          "type" => "syslog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
{
    "@timestamp" => 2016-11-14T19:56:30.296Z,
          "port" => 63179,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "at com.stackoverflow.Deeply.complain(Deeply.java:10)",
          "type" => "syslog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
{
    "@timestamp" => 2016-11-14T19:56:30.296Z,
          "port" => 63179,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "at com.stackoverflow.Nested.complain(Nested.java:8)",
          "type" => "syslog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
{
    "@timestamp" => 2016-11-14T19:56:30.296Z,
          "port" => 63179,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "at com.stackoverflow.Main.main(Main.java:20)",
          "type" => "syslog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
{
    "@timestamp" => 2016-11-14T19:56:30.296Z,
          "port" => 63179,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "\" priority=\"ERROR\" thread=\"main\"] Example error",
          "type" => "syslog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
{
    "@timestamp" => 2016-11-14T19:56:30.296Z,
          "port" => 63179,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "java.lang.RuntimeException: Exception message",
          "type" => "syslog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
{
    "@timestamp" => 2016-11-14T19:56:30.297Z,
          "port" => 63179,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "at com.stackoverflow.Deeply.complain(Deeply.java:10)",
          "type" => "syslog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
{
    "@timestamp" => 2016-11-14T19:56:30.298Z,
          "port" => 63179,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "at com.stackoverflow.Nested.complain(Nested.java:8)",
          "type" => "syslog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
{
    "@timestamp" => 2016-11-14T19:56:30.298Z,
          "port" => 63179,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "at com.stackoverflow.Main.main(Main.java:20)",
          "type" => "syslog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
{
    "@timestamp" => 2016-11-14T19:56:30.299Z,
          "port" => 63179,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "",
          "type" => "syslog",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
It looks like TCP actually does "work", but it splits a single log message into many syslog messages (e.g. whenever a \n is encountered).
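If switching to UDP is not an option, one possible workaround on the Logstash side (a sketch, not from the question) is a multiline codec on the TCP input: every line that does not begin with a syslog priority tag like "<131>" is treated as a continuation of the previous event, which reassembles the newline-split frames back into one message:

```conf
# Hypothetical: reassemble newline-split syslog frames over TCP.
input {
  tcp {
    port => 8514
    type => "syslog"
    codec => multiline {
      pattern => "^<\d+>"   # a genuine frame starts with <PRI>
      negate  => true       # lines NOT matching the pattern...
      what    => "previous" # ...are appended to the preceding event
    }
  }
}
```

The trade-off is that the last event of a burst is only flushed when the next frame arrives (or on a timeout, where supported), so this is a mitigation rather than a clean fix.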