How to make a logstash 2.3.2 configuration file more flexible

Date: 2016-07-19 11:19:07

Tags: logging logstash monitoring logstash-grok logstash-configuration

I am using logstash 2.3.2 to read and parse the log files of WSO2 ESB. I can successfully parse the log entries and send them to an API in JSON format.

The log file contains entries at different log levels, namely INFO, ERROR, WARN, and DEBUG. Currently, I only send a log entry if its level is ERROR.

Sample log file:

TID: [-1234] [] [2016-05-26 11:22:34,366]  INFO {org.wso2.carbon.application.deployer.internal.ApplicationManager} -  Undeploying Carbon Application : CustomerService_CA_01_001_1.0.0... {org.wso2.carbon.application.deployer.internal.ApplicationManager}
TID: [-1234] [] [2016-05-26 11:22:35,539]  INFO {org.apache.axis2.transport.jms.ServiceTaskManager} -  Task manager for service : CustomerService_01_001 shutdown {org.apache.axis2.transport.jms.ServiceTaskManager}
TID: [-1234] [] [2016-05-26 11:22:35,545]  INFO {org.apache.axis2.transport.jms.JMSListener} -  Stopped listening for JMS messages to service : CustomerService_01_001 {org.apache.axis2.transport.jms.JMSListener}
TID: [-1234] [] [2016-05-26 11:22:35,549]  INFO {org.apache.synapse.core.axis2.ProxyService} -  Stopped the proxy service : CustomerService_01_001 {org.apache.synapse.core.axis2.ProxyService}
TID: [-1234] [] [2016-05-26 11:22:35,553]  INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} -  Removing Axis2 Service: CustomerService_01_001 {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
TID: [-1234] [] [2016-05-26 11:22:35,572]  INFO {org.apache.synapse.deployers.ProxyServiceDeployer} -  ProxyService named 'CustomerService_01_001' has been undeployed {org.apache.synapse.deployers.ProxyServiceDeployer}
TID: [-1234] [] [2016-05-26 18:10:26,465]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  To: LogaftervalidationWSAction: urn:mediateLogaftervalidationSOAPAction: urn:mediateLogaftervalidationMessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8bLogaftervalidationDirection: response {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-05-26 18:10:26,469]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  To: XPATH-LogLastNameWSAction: urn:mediateXPATH-LogLastNameSOAPAction: urn:mediateXPATH-LogLastNameMessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8bXPATH-LogLastNameDirection: responseXPATH-LogLastNameproperty_name LastName_Value = XPATH-LogLastNameEnvelope:
TID: [-1234] [] [2016-05-26 18:10:26,477] ERROR {org.apache.synapse.mediators.transform.XSLTMediator} -  The evaluation of the XPath expression //tns1:Customer did not result in an OMNode : null {org.apache.synapse.mediators.transform.XSLTMediator}
TID: [-1234] [] [2016-05-26 18:10:26,478] ERROR {org.apache.synapse.mediators.transform.XSLTMediator} -  Unable to perform XSLT transformation using : Value {name ='null', keyValue ='gov:CustomerService/01/xslt/CustomertoCustomerSchemaMapping.xslt'} against source XPath : //tns1:Customer reason : The evaluation of the XPath expression //tns1:Customer did not result in an OMNode : null {org.apache.synapse.mediators.transform.XSLTMediator}
org.apache.synapse.SynapseException: The evaluation of the XPath expression //tns1:Customer did not result in an OMNode : null
    at org.apache.synapse.util.xpath.SourceXPathSupport.selectOMNode(SourceXPathSupport.java:100)
    at org.apache.synapse.mediators.transform.XSLTMediator.performXSLT(XSLTMediator.java:216)
    at org.apache.synapse.mediators.transform.XSLTMediator.mediate(XSLTMediator.java:196)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
    at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:149)
    at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:214)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
    at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:149)
    at org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:185)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:395)
    at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:142)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
TID: [-1234] [] [2016-05-26 18:10:26,500]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  To: , WSAction: urn:mediate, SOAPAction: urn:mediate, MessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8b, Direction: response {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-05-26 11:32:24,272]  WARN {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter} -  The running OS : Windows 8 is not a tested Operating System for running WSO2 Carbon {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter}
TID: [-1234] [] [2016-05-26 11:32:24,284]  WARN {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter} -  Carbon is configured to use the default keystore (wso2carbon.jks). To maximize security when deploying to a production environment, configure a new keystore with a unique password in the production server profile. {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter}
TID: [-1] [] [2016-05-26 11:32:24,315]  INFO {org.wso2.carbon.databridge.agent.thrift.AgentHolder} -  Agent created ! {org.wso2.carbon.databridge.agent.thrift.AgentHolder}

Configuration file:

input {
    stdin {}
    file {
        # note: the file input expects forward slashes in paths, even on Windows
        path => "C:/MyDocument/Project/SampleESBLogs/wso2carbon.log"
        type => "wso2carbon"
        start_position => "beginning"
        codec => multiline {
            pattern => "(^\s*at .+)|^(?!TID).*$"
            negate => false
            what => "previous"
        }
    }
}
filter {

    if [type] == "wso2carbon" {
        grok {
            match => [ "message", "TID:%{SPACE}\[%{INT:log_SourceSystemId}\]%{SPACE}\[%{DATA:log_ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:log_MessageType}%{SPACE}{%{JAVACLASS:log_MessageTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_Message}" ]
            add_tag => [ "grokked" ]
        }

        if "grokked" in [tags] {
            grok {
                match => ["log_MessageType", "ERROR"]
                add_tag => [ "loglevelerror" ]
            }
        }

        if !( "_grokparsefailure" in [tags] ) {
            grok {
                match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
                add_tag => [ "grokked" ]
            }
            date {
                # the grok pattern above captures the timestamp into "TimeStamp",
                # and the log format is e.g. "2016-05-26 11:22:34,366"
                match => [ "TimeStamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
                target => "TimeStamp"
                timezone => "UTC"
            }
        }

        if "multiline" in [tags] {
            grok {
                match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
                add_tag => [ "multiline" ]
                tag_on_failure => [ "multiline" ]
            }
            date {
                match => [ "TimeStamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
                target => "TimeStamp"
            }
        }
    }
}

output {
    if [type] == "wso2carbon" {
        if "loglevelerror" in [tags] {
            stdout { }
            http {
                url => "https://localhost:8086/messages"
                http_method => "post"
                format => "json"
                mapping => ["TimeStamp","%{TimeStamp}","MessageType","%{log_MessageType}","MessageTitle","%{log_MessageTitle}","Message","%{log_Message}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
            }
        }
    }
}

Problem statement:

I want to give the user a flexible option to decide which types of log entries should be sent to the API. With the current setup, only log entries of type ERROR are sent to the API.

How I am currently doing it:

At the moment I do it the following way. In my filter, I first check whether a grokked log entry has the ERROR level, and if so, I add a tag to that log entry.

if "grokked" in [tags] {
    grok {
        match => ["log_MessageType", "ERROR"]
        add_tag => [ "loglevelerror" ]
    }
}

In the output section, I check with an "if" condition: if the parsed entry carries the required tag, it is let through; otherwise it is dropped or ignored.

if "loglevelerror" in [tags] {
    stdout { }
    http {
        ....
    }
}

Now I also want to check for the other log levels. Is there a better way to do this, or do I have to add similar blocks with the same stuff, where only the condition differs?

To summarize: if I want to give someone the option, through my configuration, to choose which types of log entries (INFO, WARN, ERROR, DEBUG) they want to send to the API, by uncommenting lines or in any other way, how can I achieve that?
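One possible shape for such a user-editable option (a sketch only, not a tested answer; the field name log_MessageType comes from the grok pattern above) is a single output conditional whose level list the user edits:

```
output {
  # Edit this list to choose which levels are sent to the API,
  # e.g. trim it to ["ERROR", "WARN"].
  # For a single level, prefer an exact comparison instead:
  #   if [log_MessageType] == "ERROR" { ... }
  if [log_MessageType] in ["ERROR", "WARN", "INFO", "DEBUG"] {
    http {
      ...
    }
  }
}
```

This removes the need for the extra tagging grok, since the conditional reads the grokked level field directly.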

1 Answer:

Answer 0 (score: 1)

You can skip that grok and use a conditional check at the output. You can check whether the value of a field is in an array or matches a value.

Logstash Conditional Reference

To check for just the ERROR level:

if [log_MessageType] == "ERROR" {
  # outputs
}

To send both ERROR and WARN:

if [log_MessageType] in ["ERROR", "WARN"] {
  # outputs
}

But be careful not to do something like:

if [log_MessageType] in ["ERROR"] {

This will not behave as expected; see this question for details.
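As I understand the quirk (worth verifying against your Logstash version), a single-element list on the right-hand side of `in` is flattened to a plain string, so the test becomes a substring check instead of list membership:

```
# Pitfall: ["ERROR"] is treated as the string "ERROR", so this checks
# whether the field value is a *substring* of "ERROR" -- e.g. a value
# of "ERR" would also match.
if [log_MessageType] in ["ERROR"] {
  # may match more (or less) than intended
}

# Safe for a single level: exact comparison
if [log_MessageType] == "ERROR" {
  # exact match only
}
```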