Logstash filter conditional does not match on Linux

Time: 2018-12-07 13:27:35

Tags: elasticsearch logstash kibana logstash-grok filebeat

I have two input sources in Logstash. One is the log of a Java application, which is in JSON format.

Source message (JSON):

  {
    "@timestamp": "2018-12-07T10:15:19.244Z",
    "offset": 312710,
    "line_number": "-1",
    "thread_name": "default task-65",
    "file": "<unknown>",
    "ndc": "",
    "message": "{\"source_host\":\"localhost\",\"method\":\"<unknown>\",\"level\":\"WARN\",\"ndc\":\"\",\"mdc\":{},\"@timestamp\":\"2018-12-07T10:15:16.888Z\",\"file\":\"<unknown>\",\"line_number\":\"-1\",\"thread_name\":\"default task-65\",\"@version\":1,\"log_message\":\"REQUESTED URI \\/inbound-core\\/offer\\/submit\",\"logger_name\":\"com.server.authentication.AuthorizationFilter\",\"class\":\"<unknown>\"}",
    "class": "<unknown>",
    "source": "C:\\wildfly-8.2.0.Final\\standalone\\log\\application_log.log",
    "input": {
      "type": "log"
    },
    "method": "<unknown>",
    "prospector": {
      "type": "log"
    },
    "tags": [
      "beats_input_codec_plain_applied"
    ],
    "source_host": "localhost",
    "type": "log4j",
    "@version": 1,
    "fields": {
      "environment": "QA1"
    },
    "mdc": {},
    "level": "WARN",
    "host": {
      "name": "PL-L-R90HDHP7"
    },
    "log_message": "REQUESTED URI /inbound-core/rest/offer/submit",
    "beat": {
      "name": "PL-L-R90HDHP7",
      "hostname": "PL-L-R90HDHP7",
      "version": "6.5.1"
    },
    "logger_name": "com.server.authentication.AuthorizationFilter"
  }

The other comes from JBoss and is plain text:

     2018-12-07 14:21:16,638 INFO  [stdout] (AsyncAppender-Dispatcher-Thread-99) 14:21:16,637 WARN  [org.hibernate.mapping.RootClass] HHH000038: Composite-id class does not override equals(): com.server.entities.jpa.AwardsVEntity

I only want to apply the grok filter to the text log entries, so I match them with `if !~` (does not contain the string `"source_host"`).

On a Windows machine this works perfectly, but not on Linux. On Linux the `if` condition is effectively ignored (it never matches), so grok is also applied to the JSON log entries and the output comes out mangled.
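For reference, Logstash's `!~` operator is a regex non-match; since `"source_host"` contains no regex metacharacters, it should behave like a substring test. A minimal Python sketch of the routing logic I expect (the sample strings below are abbreviated stand-ins for the two event types, not the full messages):

```python
import re

# Abbreviated stand-ins for the two kinds of [message] fields
json_message = ('{"source_host":"localhost","level":"WARN",'
                '"log_message":"REQUESTED URI /inbound-core/offer/submit"}')
text_message = ('2018-12-07 14:21:16,638 INFO  [stdout] '
                '(AsyncAppender-Dispatcher-Thread-99) sample text entry')

def should_grok(message):
    """Mirror `if [message] !~ "source_host"`: apply grok only when
    the message does NOT contain the string "source_host"."""
    return re.search(r"source_host", message) is None

print(should_grok(json_message))  # False -> skip grok
print(should_grok(text_message))  # True  -> apply grok
```

On Windows both branches behave exactly like this sketch; on Linux the JSON entries end up in the grok branch anyway.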

Java 1.8.0_91 is installed on both machines, but it is HotSpot on Windows and the default OpenJDK on Linux. Both environments run the Elastic stack 6.5.1.

Logstash configuration:

# The # character at the beginning of a line indicates a comment. Use
# comments to describe your configuration.
input {
  beats {
    port => 5044
    type => "log4j"
  }


}

# parse JBOSS log in text format to JSON fields
filter {
  if [message] !~ "source_host" {
     grok {
         match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} +\[%{DATA:logger_name}\] +\(%{DATA:thread_name}\) %{GREEDYDATA:log_message}" }
         add_field => [ "received_at", "%{@timestamp}" ]
         add_field => [ "received_from", "%{host}" ]
         add_field => [ "fields.environment", "JBOSS" ]
    } 
  }
}


output {
  stdout { codec => json_lines }

  elasticsearch {
    # point to your elasticsearch host
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
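To sanity-check the grok pattern against the JBoss line independently of Logstash, it can be approximated as a plain regex (my hand translation of the grok macros; `TIMESTAMP_ISO8601`, `LOGLEVEL`, `DATA`, and `GREEDYDATA` are simplified here):

```python
import re

# Hand-translated approximation of the grok pattern in the filter above
GROK_AS_REGEX = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2},\d{3}) "
    r"(?P<level>[A-Z]+) +"
    r"\[(?P<logger_name>.*?)\] +"
    r"\((?P<thread_name>.*?)\) "
    r"(?P<log_message>.*)"
)

line = ("2018-12-07 14:21:16,638 INFO  [stdout] "
        "(AsyncAppender-Dispatcher-Thread-99) 14:21:16,637 WARN  "
        "[org.hibernate.mapping.RootClass] HHH000038: Composite-id class "
        "does not override equals(): com.server.entities.jpa.AwardsVEntity")

m = GROK_AS_REGEX.match(line)
print(m.group("level"))        # INFO
print(m.group("thread_name"))  # AsyncAppender-Dispatcher-Thread-99
```

The pattern itself parses the text entry fine, which is why I believe the problem is in the conditional, not in grok.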

0 Answers