Parsing log files with ANTLR

Time: 2012-09-13 16:10:41

Tags: java parsing logging antlr antlr3

I need to parse Weblogic log files using ANTLR. Here is an example:

Tue Aug 28 09:39:09 MSD 2012 [test] [[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] Alert - There is no user password credential mapper provider configured in your security realm. Oracle Service Bus service account management will be disabled. Configure a user password credential mapper provider if you need OSB service account support.

Sun Sep 02 23:13:00 MSD 2012 [test] [[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'] Warning - Timer (Checkpoint) has been triggered with a tick (205 873) that is less than or equal to the last tick that was received (205 873). This could happen in a cluster due to clock synchronization with the timer authority. The current trigger will be ignored, and operation will be skipped.
Mon Sep 03 10:35:54 MSD 2012 [test] [[ACTIVE] ExecuteThread: '19' for queue: 'weblogic.kernel.Default (self-tuning)'] Info - 
 [OSB Tracing] Inbound request was received. 

 Service Ref = Some/URL
 URI = Another/URL
 Message ID = u-u-i-d
 Request metadata =
    <xml-fragment>
      <tran:headers xsi:type="http:HttpRequestHeaders" xmlns:http="http://www.bea.com/wli/sb/transports/http" xmlns:tran="http://www.bea.com/wli/sb/transports" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <http:Accept-Encoding>gzip, deflate,gzip, deflate</http:Accept-Encoding>
        <http:Connection>Keep-Alive</http:Connection>
        <http:Content-Length>666</http:Content-Length>
        <http:Content-Type>text/xml; charset=utf-8</http:Content-Type>
        <http:Host>some.host.name</http:Host>
        <http:SOAPAction>""</http:SOAPAction>
      </tran:headers>
      <tran:encoding xmlns:tran="http://www.bea.com/wli/sb/transports">utf-8</tran:encoding>
      <http:client-host xmlns:http="http://www.bea.com/wli/sb/transports/http">1.2.3.4</http:client-host>
      <http:client-address xmlns:http="http://www.bea.com/wli/sb/transports/http">1.2.3.4</http:client-address>
      <http:http-method xmlns:http="http://www.bea.com/wli/sb/transports/http">POST</http:http-method>
    </xml-fragment>
 Payload =  
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><XMLHere/></s:Envelope>

This is the part of the log I am interested in; everything else must be ignored (I need to parse the Date, the Service Ref value, and the Envelope XML):

Sun Sep 02 23:13:00 MSD 2012 [test] [[ACTIVE] ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'] Warning - Timer (Checkpoint) has been triggered with a tick (205 873) that is less than or equal to the last tick that was received (205 873). This could happen in a cluster due to clock synchronization with the timer authority. The current trigger will be ignored, and operation will be skipped.
    Mon Sep 03 10:35:54 MSD 2012 [test] [[ACTIVE] ExecuteThread: '19' for queue: 'weblogic.kernel.Default (self-tuning)'] Info - 
     [OSB Tracing] Inbound request was received. 

     Service Ref = Some/URL
     URI = Another/URL
     Message ID = u-u-i-d
     Request metadata =
        <xml-fragment>
          <tran:headers xsi:type="http:HttpRequestHeaders" xmlns:http="http://www.bea.com/wli/sb/transports/http" xmlns:tran="http://www.bea.com/wli/sb/transports" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <http:Accept-Encoding>gzip, deflate,gzip, deflate</http:Accept-Encoding>
            <http:Connection>Keep-Alive</http:Connection>
            <http:Content-Length>666</http:Content-Length>
            <http:Content-Type>text/xml; charset=utf-8</http:Content-Type>
            <http:Host>some.host.name</http:Host>
            <http:SOAPAction>""</http:SOAPAction>
          </tran:headers>
          <tran:encoding xmlns:tran="http://www.bea.com/wli/sb/transports">utf-8</tran:encoding>
          <http:client-host xmlns:http="http://www.bea.com/wli/sb/transports/http">1.2.3.4</http:client-host>
          <http:client-address xmlns:http="http://www.bea.com/wli/sb/transports/http">1.2.3.4</http:client-address>
          <http:http-method xmlns:http="http://www.bea.com/wli/sb/transports/http">POST</http:http-method>
        </xml-fragment>
     Payload =  
    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><XMLHere/></s:Envelope>

Here is my lexer:

lexer grammar LogLexer;

options {filter=true;}

 /*------------------------------------------------------------------
 * LEXER RULES
 *------------------------------------------------------------------*/
LOGDATE : DAY ' ' MONTH ' ' NUMDAY ' ' NUMTIME ' ' TIMEZONE ' ' NUMYEAR;

METAINFO : '[' .* ']' ' [[' .* ']' .* ']' .* '-' .* '[OSB Tracing] Inbound request was received.';

SERVICE_REF : 'Service Ref = ';

URI : (SYMBOL | '/')+;

ENVELOPE_TAG : '<' ENVELOPE_TAGNAME .* '>' .* '</' ENVELOPE_TAGNAME '>';

fragment
ENVELOPE_TAGNAME : SYMBOL+ ':Envelope';

fragment
NUMTIME : NUM NUM ':' NUM NUM ':' NUM NUM;

fragment
TIMEZONE : SYMBOL SYMBOL SYMBOL;

fragment
DAY : 'Sun' | 'Mon' | 'Tue' | 'Wed' | 'Thu' | 'Fri' | 'Sat';

fragment
MONTH : 'Jan' | 'Feb' | 'Mar' | 'Apr' | 'May' | 'Jun' | 'Jul' | 'Aug' | 'Sep' | 'Oct' | 'Nov' | 'Dec';

fragment
NUMYEAR : NUM NUM NUM NUM;

fragment
NUMDAY : NUM NUM;

fragment
NUM : '0'..'9';

fragment
SYMBOL : ('a'..'z' | 'A'..'Z');

Here is my parser (not finished yet):

grammar LogParser;

options {
tokenVocab = LogLexer;
}

@header {
    import java.util.List;
    import java.util.ArrayList;
}

parse 
    returns [List<List<String>> entries] 
    @init {
        $entries = new ArrayList<List<String>>();
    }
    : ( requestLogEntry
        {
            // add inside the loop so every entry is collected, not just the last
            $entries.add($requestLogEntry.logEntry);
        }
      )+
    ;

requestLogEntry 
    returns [List<String> logEntry]
    @init {
        $logEntry = new ArrayList<String>();
    }
    : LOGDATE METAINFO .* serviceRef .* ENVELOPE_TAG
    {
        $logEntry.add($LOGDATE.getText());
        $logEntry.add($serviceRef.serviceURI);
        $logEntry.add($ENVELOPE_TAG.getText());
    };

serviceRef 
    returns [String serviceURI] 
    : SERVICE_REF URI 
    {
        $serviceURI = $URI.getText();
    };

The problem is that it parses the log incorrectly. My code does not ignore the unwanted records, so I get an invalid DATE value in the result list: Tue Aug 28 09:39:09 MSD 2012 (the first one in the example) instead of Mon Sep 03 10:35:54 MSD 2012 (the correct one). Can anyone help me?

Thanks in advance for your answers.

UPDATE

I have updated my code, but now I am getting generation errors and cannot see what is wrong.

Updated lexer:

lexer grammar LogLexer;

options {
    filter=true;
}

TRASH : LOGDATE ' ' METAINFO (' ' | '\n')* { skip(); };

LOGDATE : DAY ' ' MONTH ' ' NUMDAY ' ' NUMTIME ' ' TIMEZONE ' ' NUMYEAR;

METAINFO : ('[' | ']' | SYMBOL | NUM | ' ' | SPECIAL)+;

OSB_METAINFO : (' ' | '\n')* '[OSB Tracing] Inbound request was received.';

SERVICE_REF : 'Service Ref = ';

URI : (SYMBOL | '/')+;

ENVELOPE_TAG : '<' ENVELOPE_TAGNAME .* '>' .* '</' ENVELOPE_TAGNAME '>';

fragment
OSB_TRACING : '[OSB Tracing] Inbound request was received.';

fragment
ENVELOPE_TAGNAME : SYMBOL+ ':Envelope';

fragment
NUMTIME : NUM NUM ':' NUM NUM ':' NUM NUM;

fragment
TIMEZONE : SYMBOL SYMBOL SYMBOL;

fragment
DAY : 'Sun' | 'Mon' | 'Tue' | 'Wed' | 'Thu' | 'Fri' | 'Sat';

fragment
MONTH : 'Jan' | 'Feb' | 'Mar' | 'Apr' | 'May' | 'Jun' | 'Jul' | 'Aug' | 'Sep' | 'Oct' | 'Nov' | 'Dec';

fragment
NUMYEAR : NUM NUM NUM NUM;

fragment
NUMDAY : NUM NUM;

fragment
NUM : '0'..'9';

fragment
SYMBOL : ('a'..'z' | 'A'..'Z');

fragment
SPECIAL : ( ~'\n' | '\'' | '.' | '(' | ')' | '-');

Updated parser:

parser grammar LogParser;

options {
    tokenVocab = LogLexer;
}

@header {
    import java.util.List;
    import java.util.ArrayList;
}

parse returns [List<List<String>> entries] 
    @init {
        $entries = new ArrayList<List<String>>();
    }
    : ( requestLogEntry
        {
            // add inside the loop so every entry is collected, not just the last
            $entries.add($requestLogEntry.logEntry);
        }
      )+
    ;

requestLogEntry 
    returns [List<String> logEntry]
    @init {
        $logEntry = new ArrayList<String>();
    }
    :  LOGDATE ' ' METAINFO OSB_METAINFO .* serviceRef .* ENVELOPE_TAG
    {
        $logEntry.add($LOGDATE.getText());
        $logEntry.add($serviceRef.serviceURI);
        $logEntry.add($ENVELOPE_TAG.getText());
    };

serviceRef 
    returns [String serviceURI] 
    : SERVICE_REF URI 
    {
        $serviceURI = $URI.getText();
    };

The lexer generation produces these errors:

[14:18:12] error(204): LogLexer.g:56:21: duplicate token type '\'' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:28: duplicate token type '.' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:34: duplicate token type '(' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:40: duplicate token type ')' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:46: duplicate token type '-' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:21: duplicate token type '\'' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:28: duplicate token type '.' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:34: duplicate token type '(' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:40: duplicate token type ')' when collapsing subrule into set
[14:18:12] error(204): LogLexer.g:56:46: duplicate token type '-' when collapsing subrule into set

These errors seem to appear and disappear at random (e.g. after renaming the file). In addition, ANTLR generates another lexer from my parser file (this also seems to happen randomly). I am using the latest available ANTLR 3 and ANTLRWorks on Windows 7 (x64).

3 Answers:

Answer 0 (score: 3):


这些错误似乎随机发生并随机消失(文件重命名)。

No, they are not random. The errors come from this rule:

fragment
SPECIAL : ( ~'\n' | '\'' | '.' | '(' | ')' | '-');

~'\n' already matches '\'' | '.' | '(' | ')' | '-'. You probably meant:

fragment
SPECIAL : ~('\n' | '\'' | '.' | '(' | ')' | '-');
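The difference is easy to see with Java regex character classes, which behave analogously to ANTLR's set negation (this is just an analogy for illustration, not ANTLR API): `~'\n'` corresponds to `[^\n]`, which already lets all of those punctuation characters through, whereas the complement of the whole set excludes them.

```java
import java.util.regex.Pattern;

public class SetNegation {
    public static void main(String[] args) {
        // ~'\n' in ANTLR is analogous to the regex class [^\n]:
        Pattern anyButNewline = Pattern.compile("[^\n]");
        // ~('\n' | '\'' | '.' | '(' | ')' | '-') is analogous to [^\n'.()-]:
        Pattern special = Pattern.compile("[^\n'.()\\-]");

        System.out.println(anyButNewline.matcher("(").matches()); // true: '(' slips through
        System.out.println(special.matcher("(").matches());       // false: '(' is excluded
    }
}
```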

> In addition, ANTLR generates another lexer from my parser file (this also seems to happen randomly). I am using the latest available ANTLR 3 and ANTLRWorks on Windows 7 (x64).

This only happens when you do not specify the grammar type. For example, grammar T (a so-called combined grammar) generates both a lexer and a parser, whereas parser grammar T and lexer grammar T generate only a parser and only a lexer, respectively. I see you originally posted a combined grammar; the "extra" lexer class is probably a leftover from when your grammar was still combined.

Also, make sure not to use any literal tokens in the parser grammar! (Remove the ' ' from the requestLogEntry rule.)

Answer 1 (score: 1):

I'm not entirely sure I follow what counts as a valid service request and what does not, but I'll press on anyway.

Your parser is looking for:

LOGDATE METAINFO .* serviceRef .* ENVELOPE_TAG

while the lexer, before parsing even starts, is looking for LOGDATE + METAINFO + some stuff + serviceRef.

The lexer has no way of knowing that you want to discard the first two LOGDATEs, which have no serviceRef, and keep only the third entry, which does. So it lexes the first line as the start of a complete entry.

Without just handing you the answer and robbing you of the joy of deep understanding, I suggest you make your lexer "understand better" what makes up a correct entry. The lexer should also understand what makes up a wrong entry.

In other words, how would you rewrite the lexer so that it chews through a few lexemes and decides that the first two lines are "just a date" while the third is the real deal?

Answer 2 (score: 0):

kernel_mode!

I am not sure that the approach you have chosen for parsing the log file is the most suitable one.

In my opinion, ANTLR is meant for describing context-free grammars, whereas in your case we are dealing with the simplest kind of regular grammar.

From my experience, I would venture to say that for parsing log files a line-by-line reader is the simpler and more optimal approach. The algorithm goes something like this:

1. Read a line. Go to step 2.1.
2.1. If the line starts with the regexp (one that parses "Mon Sep 03 10:35:54 MSD 2012"), parse it and save the result for further use. Go to step 1.
2.2. If the line does not match the regexp above, go to step 3.
3. Read lines until one starts with the reg.exp from step 2.1. Go to step 4.
4. Try to cut the XML out of the block you have read (using simple string functions). Go to step 5.
5. Does the block contain XML? => Parse the XML and use the previously saved parse result as the log date. Then go to step 1.
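A minimal sketch of that algorithm in plain Java (the class name, the regexps, and the "date | serviceRef | envelope" record format are all made up for this example; the Service Ref and Envelope patterns are deliberately naive):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogScanner {

    // Step 2.1's regexp: recognises "Mon Sep 03 10:35:54 MSD 2012"
    static final Pattern DATE = Pattern.compile(
        "^\\s*((?:Sun|Mon|Tue|Wed|Thu|Fri|Sat) "
        + "(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) "
        + "\\d{2} \\d{2}:\\d{2}:\\d{2} \\w{3} \\d{4})");
    static final Pattern SERVICE_REF = Pattern.compile("Service Ref = (\\S+)");
    static final Pattern ENVELOPE = Pattern.compile("(<\\w+:Envelope.*</\\w+:Envelope>)");

    // A trimmed-down version of the log from the question.
    static final String SAMPLE =
        "Tue Aug 28 09:39:09 MSD 2012 [test] [...] Alert - no credential mapper\n"
      + "Mon Sep 03 10:35:54 MSD 2012 [test] [...] Info - \n"
      + " [OSB Tracing] Inbound request was received. \n"
      + " Service Ref = Some/URL\n"
      + " Payload =  \n"
      + "<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\">"
      + "<XMLHere/></s:Envelope>\n";

    // Returns one "date | serviceRef | envelope" record per entry that
    // actually carries a Service Ref and an Envelope.
    public static List<String> extract(String log) {
        List<String> records = new ArrayList<>();
        String date = null, serviceRef = null;
        for (String line : log.split("\n")) {
            Matcher m = DATE.matcher(line);
            if (m.find()) {          // steps 1/2.1: a new entry starts here
                date = m.group(1);
                serviceRef = null;   // reset per-entry state
                continue;
            }
            m = SERVICE_REF.matcher(line);
            if (m.find()) { serviceRef = m.group(1); continue; }
            m = ENVELOPE.matcher(line);  // steps 4/5: cut out the XML
            if (m.find() && date != null && serviceRef != null) {
                records.add(date + " | " + serviceRef + " | " + m.group(1));
            }
        }
        return records;
    }

    public static void main(String[] args) {
        for (String record : extract(SAMPLE)) {
            System.out.println(record);
        }
    }
}
```

On the shortened sample, only the Mon Sep 03 entry produces a record; the Tue Aug 28 alert is dropped because no Service Ref follows it before the next date line.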