Logstash Tomcat logs

Time: 2015-11-03 14:36:24

Tags: elasticsearch logstash kibana

I want to parse Tomcat logs; they contain SOAP/REST requests and responses. Can anyone give me a good example of how to parse these logs and store them in Elasticsearch in JSON format?

2 Answers:

Answer 0: (score: 0)

Thanks, Alan, for the reply. Here is a sample I am trying to parse with a grok pattern. I am new to this, so I am trying to figure out whether this is the right approach.

2015-09-28 10:50:30,249 {http-apr-8080-exec-4} INFO [org.apache.cxf.services.interfaceType] 1.0.0-LOCAL - Inbound Message
ID: 1
Address: http://localhost:8080/interface/interface?wsdl
Encoding: UTF-8
Http-Method: POST
Content-Type: text/xml; charset=UTF-8
Headers: {Accept=[*/*], cache-control=[no-cache], connection=[keep-alive], Content-Length=[2871], content-type=[text/xml; charset=UTF-8], host=[localhost:8080], pragma=[no-cache], SOAPAction=["http://services.localhost.com/calculate"], user-agent=[Apache CXF 2.7.5]}
Payload: users_19911111test123456false

Answer 1: (score: 0)

filter {
  if [type] == "tomcatlog" {
    multiline {
      # Join any line that does not start with a timestamp onto the previous event.
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
    grok {
      # The thread name is wrapped in {...}, so the capture must exclude '}' (not ')').
      match => {
        "message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}\{(?<thread>[^}]+)\}%{SPACE}%{LOGLEVEL:level}%{SPACE}\[(?<logger>[^\]]+)\]%{SPACE}%{GREEDYDATA:message}"
      }
      # Replace the original message field instead of turning it into an array.
      overwrite => [ "message" ]
    }
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
      remove_field => [ "timestamp" ]
    }
  }
}
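The filter above only parses events; to actually store them in Elasticsearch as JSON documents (what the question asks for), an output section is also needed. Below is a minimal sketch; the host and index name are assumptions to adjust for your cluster, and on older Logstash versions the option may be `host` rather than `hosts`:

```
output {
  if [type] == "tomcatlog" {
    elasticsearch {
      # Assumed local node; each event is indexed as a JSON document.
      hosts => [ "localhost:9200" ]
      # Daily indices, e.g. tomcat-logs-2015.09.28 (index name is an assumption).
      index => "tomcat-logs-%{+YYYY.MM.dd}"
    }
  }
}
```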

Use http://grokdebug.herokuapp.com/ to build and test grok filters; it also lists the useful built-in patterns.
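As an offline alternative to the debugger, the grok pattern from the answer can be approximated with a plain regex and checked against the first line of the sample log. The character classes below are simplified, hand-expanded stand-ins for TIMESTAMP_ISO8601, LOGLEVEL, and GREEDYDATA, not the exact definitions shipped with Logstash:

```python
import re
from datetime import datetime

# Hand-expanded approximation of the grok pattern (simplified, not exact).
# The thread capture uses [^}]+ because the thread name is delimited by braces.
LOG_RE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\s+"
    r"\{(?P<thread>[^}]+)\}\s+"
    r"(?P<level>TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\s+"
    r"\[(?P<logger>[^\]]+)\]\s+"
    r"(?P<message>.*)",
    re.DOTALL,
)

line = ("2015-09-28 10:50:30,249 {http-apr-8080-exec-4} INFO "
        "[org.apache.cxf.services.interfaceType] 1.0.0-LOCAL - Inbound Message")

m = LOG_RE.match(line)
print(m.group("thread"))  # http-apr-8080-exec-4
print(m.group("level"))   # INFO

# The date filter's "yyyy-MM-dd HH:mm:ss,SSS" corresponds to this strptime format.
ts = datetime.strptime(m.group("timestamp"), "%Y-%m-%d %H:%M:%S,%f")
```

If the regex matches here, the corresponding grok pattern is a reasonable starting point; the debugger can then confirm it against the real pattern definitions.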