Logstash with Helm in Kubernetes: grok filter not working

Asked: 2018-09-11 07:31:59

Tags: elasticsearch kubernetes logstash logstash-grok kubernetes-helm

I installed a Filebeat -> Logstash -> Elasticsearch -> Kibana stack in Kubernetes using the Helm charts:

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name elastic --namespace monitoring incubator/elasticsearch --set client.replicas=1,master.replicas=2,data.replicas=1

helm install --name logstash --namespace monitoring incubator/logstash -f logstash_values.yaml

helm install --name filebeat stable/filebeat -f filebeat_values.yaml

helm install stable/kibana --name kibana --namespace monitoring 
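
As a quick sanity check after installation, the pods can be listed with kubectl (note that Filebeat above was installed without an explicit --namespace, so it lands in the default namespace):

# list the Elasticsearch, Logstash and Kibana pods
kubectl get pods -n monitoring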

The logs are indexed in Elasticsearch, but the "message" field contains the entire raw string instead of the fields I defined. My grok filter does not seem to be applied in the Logstash configuration.

There is no documentation at https://github.com/helm/charts/tree/master/incubator/logstash on how to set up grok patterns.

Here is what I tried:

My log format:

10-09-2018 11:57:55.906 [Debug] [LOG] serviceName - Technical - my specific message - correlationId - userId - data - operation - error - stackTrace escaped on one line

logstash_values.yaml (based on https://github.com/helm/charts/blob/master/incubator/logstash/values.yaml):

elasticsearch:
  host: elasticsearch-client.default.svc.cluster.local
  port: 9200

patterns:
   main: |-
     (?<time>(?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)\.(?:[0-9]){3})} [(?<logLevel>.*)] [(?<code>.*)] (?<caller>.*) - (?<logMessageType>.*) - (?<message>.*) - (?<correlationId>.*) - (?<userId>.*) - (?<data>.*) - (?<operation>.*) - (?<error>.*) - (?<stackTrace>.*)

inputs:
  main: |-
    input {
      beats {
        port => 5044
      }
    }

filters:

outputs:
  main: |-
    output {
      elasticsearch {
        hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }

This ends up in the Kubernetes ConfigMap "logstash-patterns":

apiVersion: v1
kind: ConfigMap
data:
  main: (?<time>(?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)\.(?:[0-9]){3}) [(?<code>.*)] [(?<logLevel>.*)] (?<service>.*) - (?<logMessageType>.*) - (?<message>.*) - (?<correlationId>.*) - (?<userId>.*) - (?<data>.*) - (?<operation>.*) - (?<error>.*) - (?<stackTrace>.*)
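
To verify what the chart actually rendered, the ConfigMap can be inspected directly; a quick check (the ConfigMap name is the one mentioned above, the namespace comes from the install command):

# dump the rendered patterns ConfigMap
kubectl get configmap logstash-patterns -n monitoring -o yaml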

I don't see any error logs in the Logstash pod.

Do you know how to configure grok patterns for Logstash in Kubernetes?

Thanks.

1 Answer:

Answer 0 (score: 1)

I was mixing up "patterns" and "filters".

In the Helm chart, "patterns" is meant for defining custom grok patterns (https://grokdebug.herokuapp.com/patterns), for example:

MY_CUSTOM_ALL_CHARS .*
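
A pattern defined this way can then be referenced by name from a grok filter; a minimal sketch (the pattern name comes from the example above, the target field "rest" is just an illustrative placeholder):

filter {
  grok {
    # %{PATTERN_NAME:field} captures whatever the custom pattern matches into the "rest" field
    match => { "message" => "%{MY_CUSTOM_ALL_CHARS:rest}" }
  }
}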

My grok filter should go in the "filters" section:

patterns:
  # nothing here for me 

filters:
  main: |-
    filter {
      grok {
        match => { "message" => "\{%{TIMESTAMP_ISO8601:time}\} \[%{DATA:logLevel}\] \[%{DATA:code}\] %{DATA:caller} &\$ %{DATA:logMessageType} &\$ %{DATA:message} &\$ %{DATA:correlationId} &\$ %{DATA:userId} &\$ %{DATA:data} &\$ %{DATA:operation} &\$ %{DATA:error} &\$ (?<stackTrace>.*)" }
        overwrite => [ "message" ]
      }
      date {
        match => ["time", "ISO8601"]
        target => "time"
      }
    }
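
To confirm that the grok filter actually matches, it can help to temporarily add a stdout output next to the elasticsearch one (a debugging sketch only, reusing the output block from the question) and then follow the Logstash pod logs with kubectl logs:

outputs:
  main: |-
    output {
      # print every parsed event to the pod's stdout for debugging
      stdout { codec => rubydebug }
      elasticsearch {
        hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }

Events that fail to match the grok expression are tagged with _grokparsefailure, which is easy to spot in that output.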