Parsing logs containing Python tracebacks with Logstash

Date: 2015-06-22 10:27:09

Tags: python logstash logstash-grok logstash-configuration

I have been trying to parse my Python traceback logs with Logstash. My logs look like this:

[pid: 26422|app: 0|req: 73/73] 192.168.1.1 () {34 vars in 592 bytes} [Wed Feb 18 13:35:55 2015] GET /data => generated 2538923 bytes in 4078 msecs (HTTP/1.1 200) 2 headers in 85 bytes (1 switches on core 0)
Traceback (most recent call last):
  File "/var/www/analytics/parser.py", line 257, in parselogfile
    parselogline(basedir, lne)
  File "/var/www/analytics/parser.py", line 157, in parselogline
    pval = understandpost(parts[3])
  File "/var/www/analytics/parser.py", line 98, in understandpost
    val = json.loads(dct["events"])
  File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Unterminated string starting at: line 1 column 355 (char 354)

So far I have been able to parse everything in the log except its last line, i.e.

ValueError: Unterminated string starting at: line 1 column 355 (char 354)

I am doing this with multiline filters. My Logstash configuration looks like this:

filter {

    multiline {
        pattern => "^Traceback"
        what => "previous"
    }

    multiline {
        pattern => "^ "
        what => "previous"
    }


    grok {
        match => [
            "message", "\[pid\: %{NUMBER:process_id:int}\|app: 0\|req: %{NUMBER}/%{NUMBER}\] %{IPORHOST:clientip} \(\) \{%{NUMBER:vars:int} vars in %{NUMBER:bytes:int} bytes\} \[%{GREEDYDATA:timestamp}\] %{WORD:method} /%{GREEDYDATA:referrer} \=\> generated %{NUMBER:generated_bytes:int} bytes in %{NUMBER} msecs \(HTTP/%{NUMBER} %{NUMBER:status_code:int}\) %{NUMBER:headers:int} headers in %{NUMBER:header_bytes:int} bytes \(%{NUMBER:switches:int} switches on core %{NUMBER:core:int}\)%{GREEDYDATA:traceback}"
            ]
    }

    if "_grokparsefailure" in [tags] {
        multiline {
            pattern => "^.*$"
            what => "previous"
            negate => "true"
        }
    }

    if "_grokparsefailure" in [tags] {
        grok {
            match => [
                  "message", "\[pid\: %{NUMBER:process_id:int}\|app: 0\|req: %{NUMBER}/%{NUMBER}\] %{IPORHOST:clientip} \(\) \{%{NUMBER:vars:int} vars in %{NUMBER:bytes:int} bytes\} \[%{GREEDYDATA:timestamp}\] %{WORD:method} /%{GREEDYDATA:referrer} \=\> generated %{NUMBER:generated_bytes:int} bytes in %{NUMBER} msecs \(HTTP/%{NUMBER} %{NUMBER:status_code:int}\) %{NUMBER:headers:int} headers in %{NUMBER:header_bytes:int} bytes \(%{NUMBER:switches:int} switches on core %{NUMBER:core:int}\)%{GREEDYDATA:traceback}"
        ]
            remove_tag => ["_grokparsefailure"]
        }
    }
}

But my last line is not parsed. Instead, it still gives me an error and also exponentially increases the processing time. Any suggestions on how to parse the last line of the traceback?

1 Answer:

Answer 0 (score: 7)

Well, I found the solution. The approach I followed is to treat lines starting with '[' as the start of a new log message and append every other line to the end of the previous message. A grok filter can then be applied and the traceback can be parsed. Note that I have to apply two grok filters:

  1. One with GREEDYDATA, to capture the traceback when there is one.

  2. When there is no traceback, the GREEDYDATA parse fails, so I have to remove the _grokparsefailure tag and apply grok again without GREEDYDATA. This is done with the help of an if block.

  3. The final Logstash filter looks like this:

    filter {
    
        multiline {
            pattern => "^[^\[]"
            what => "previous"
        }
    
    
    
        grok {
            match => [
                "message", "\[pid\: %{NUMBER:process_id:int}\|app: 0\|req: %{NUMBER}/%{NUMBER}\] %{IPORHOST:clientip} \(\) \{%{NUMBER:vars:int} vars in %{NUMBER:bytes:int} bytes\} \[%{GREEDYDATA:timestamp}\] %{WORD:method} /%{GREEDYDATA:referrer} \=\> generated %{NUMBER:generated_bytes:int} bytes in %{NUMBER} msecs \(HTTP/%{NUMBER} %{NUMBER:status_code:int}\) %{NUMBER:headers:int} headers in %{NUMBER:header_bytes:int} bytes \(%{NUMBER:switches:int} switches on core %{NUMBER:core:int}\)%{GREEDYDATA:traceback}"
            ]
        }
    
        if "_grokparsefailure" in [tags] {
            grok {
                match => [
                "message", "\[pid\: %{NUMBER:process_id:int}\|app: 0\|req: %{NUMBER}/%{NUMBER}\] %{IPORHOST:clientip} \(\) \{%{NUMBER:vars:int} vars in %{NUMBER:bytes:int} bytes\} \[%{GREEDYDATA:timestamp}\] %{WORD:method} /%{GREEDYDATA:referrer} \=\> generated %{NUMBER:generated_bytes:int} bytes in %{NUMBER} msecs \(HTTP/%{NUMBER} %{NUMBER:status_code:int}\) %{NUMBER:headers:int} headers in %{NUMBER:header_bytes:int} bytes \(%{NUMBER:switches:int} switches on core %{NUMBER:core:int}\)"
                    ]
                remove_tag => ["_grokparsefailure"]
            }
        }
    
        else {
            mutate {
                convert => {"traceback" => "string"}
            }
        }
    
        # The timestamp captured above looks like "Wed Feb 18 13:35:55 2015",
        # so the date pattern has to match that format (note: in these
        # Joda-style patterns, minutes are "mm", not "MM").
        date {
            match => ["timestamp", "EEE MMM dd HH:mm:ss yyyy"]
            locale => "en"
        }
        geoip {
            source => "clientip"
        }
        # The useragent filter assumes an "agent" field containing a
        # user-agent string; the grok pattern above does not extract one,
        # so this only does something if such a field is added elsewhere.
        useragent {
            source => "agent"
            target => "Useragent"
        }
    }
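
    To iterate on a filter like this, it can help to run it against the sample log with a throw-away stdin/stdout pipeline before wiring up the real inputs. A minimal sketch (the file name test.conf is made up):

        # test.conf -- paste the sample log into stdin and inspect the result
        input  { stdin { } }

        # ... the filter { ... } block from above goes here ...

        output { stdout { codec => rubydebug } }

    Running it with bin/logstash -f test.conf prints every parsed event with all of its fields via the rubydebug codec, which makes it easy to see whether the traceback ended up where you expect.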
    

    Alternatively, if you do not want to check for another grok pattern in an if block and remove _grokparsefailure, you can have the first grok filter handle both message types by putting multiple message/pattern pairs in the grok filter's match array. It can be done like this:

            grok {
                match => [
                "message", "\[pid\: %{NUMBER:process_id:int}\|app: 0\|req: %{NUMBER}/%{NUMBER}\] %{IPORHOST:clientip} \(\) \{%{NUMBER:vars:int} vars in %{NUMBER:bytes:int} bytes\} \[%{GREEDYDATA:timestamp}\] %{WORD:method} /%{GREEDYDATA:referrer} \=\> generated %{NUMBER:generated_bytes:int} bytes in %{NUMBER} msecs \(HTTP/%{NUMBER} %{NUMBER:status_code:int}\) %{NUMBER:headers:int} headers in %{NUMBER:header_bytes:int} bytes \(%{NUMBER:switches:int} switches on core %{NUMBER:core:int}\)",
                "message", "\[pid\: %{NUMBER:process_id:int}\|app: 0\|req: %{NUMBER}/%{NUMBER}\] %{IPORHOST:clientip} \(\) \{%{NUMBER:vars:int} vars in %{NUMBER:bytes:int} bytes\} \[%{GREEDYDATA:timestamp}\] %{WORD:method} /%{GREEDYDATA:referrer} \=\> generated %{NUMBER:generated_bytes:int} bytes in %{NUMBER} msecs \(HTTP/%{NUMBER} %{NUMBER:status_code:int}\) %{NUMBER:headers:int} headers in %{NUMBER:header_bytes:int} bytes \(%{NUMBER:switches:int} switches on core %{NUMBER:core:int}\)%{GREEDYDATA:traceback}"
                    ]
            }
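
    One caveat with this variant: grok tries the patterns in the order they appear in the match array and, with the default break_on_match => true, stops at the first one that matches. Because the patterns are not anchored to the end of the message, the shorter pattern also matches messages that do contain a traceback, so you will most likely want to list the pattern ending in %{GREEDYDATA:traceback} first; otherwise the traceback field may never be filled.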
    

    There is also a third approach (probably the most elegant one). It looks like this:

    grok {
        match => [
            "message", "\[pid\: %{NUMBER:process_id:int}\|app: 0\|req: %{NUMBER}/%{NUMBER}\] %{IPORHOST:clientip} \(\) \{%{NUMBER:vars:int} vars in %{NUMBER:bytes:int} bytes\} \[%{GREEDYDATA:timestamp}\] %{WORD:method} /%{GREEDYDATA:referrer} \=\> generated %{NUMBER:generated_bytes:int} bytes in %{NUMBER} msecs \(HTTP/%{NUMBER} %{NUMBER:status_code:int}\) %{NUMBER:headers:int} headers in %{NUMBER:header_bytes:int} bytes \(%{NUMBER:switches:int} switches on core %{NUMBER:core:int}\)(%{GREEDYDATA:traceback})?"
        ]
    }
    

    Note that in this approach, the field whose presence is optional has to be wrapped in "()?", here (%{GREEDYDATA:traceback})?.

    This way, the grok filter parses the field if it is present and simply skips it otherwise.
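
    As a side note, newer Logstash versions have deprecated the multiline filter in favor of the multiline codec, which joins the lines directly on the input. A minimal sketch of the same rule (anything that does not start with '[' belongs to the previous event), with an example file path that you would need to adapt:

        input {
            file {
                path => "/var/log/uwsgi/analytics.log"   # example path only
                codec => multiline {
                    pattern => "^[^\["
                    what => "previous"
                }
            }
        }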