I need help configuring Fluentd to filter logs based on severity.
We have two different monitoring systems, Elasticsearch and Splunk. When we enable the DEBUG log level in our application it generates a huge volume of logs every day, so we want to filter the logs by severity and push them to the two different logging systems.
When a log has severity INFO or ERROR, the container log should be forwarded to Splunk; everything else (DEBUG, TRACE, WARN, and so on) should go to Elasticsearch. Please help me with how to do this filtering.
Here is the format in which the logs are generated:
event.log:{"@severity":"DEBUG","@timestamp":"2019-01-18T00:15:34.416Z","@traceId":
event.log:{"@severity":"INFO","@timestamp":"2019-01-18T00:15:34.397Z","@traceId":
event.log:{"@severity":"WARN","@timestamp":"2019-01-18T00:15:34.920Z","@traceId":
Please find my Fluentd configuration below.
I added an exclude block in a filter, installed the grep plugin, and added a grep filter for testing:
<exclude>
  @type grep
  key severity
  pattern DEBUG
</exclude>
I also added:
<filter kubernetes.**>
  @type grep
  exclude1 severity (DEBUG|NOTICE|WARN)
</filter>
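For reference, I believe both snippets fail because the field in the logs is `@severity` (not `severity`), and with Fluentd v1's built-in grep filter the `<exclude>` directive belongs inside a `<filter>` of `@type grep`. A minimal, untested sketch of what both attempts above were trying to do, assuming `@severity` is still a top-level field at that point in the pipeline:

<filter kubernetes.**>
  @type grep
  <exclude>
    # drop any record whose @severity matches one of these levels
    key @severity
    pattern /DEBUG|NOTICE|WARN/
  </exclude>
</filter>

Note that the ConfigMap below shows kubernetes.conf twice: first the version with my test filters applied, then the original one.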
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-config
  namespace: logging
  labels:
    k8s-app: fluentd
data:
  fluentd-standalone.conf: |
    <match fluent.**>
      @type null
    </match>
    # include other configs
    @include systemd.conf
    @include kubernetes.conf
  fluentd.conf: |
    # Use the config specified by the FLUENTD_CONFIG environment variable, or
    # default to fluentd-standalone.conf
    @include "#{ENV['FLUENTD_CONFIG'] || 'fluentd-standalone.conf'}"
  kubernetes.conf: |
    <source>
      @type tail
      @log_level debug
      path /var/log/containers/*.log
      pos_file /var/log/kubernetes.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      tag kubernetes.*
      format json
    </source>
    <filter kubernetes.**>
      @type kubernetes_metadata
      verify_ssl false
      <exclude>
        @type grep
        key severity
        pattern DEBUG
      </exclude>
    </filter>
    <filter kubernetes.**>
      @type record_transformer
      enable_ruby
      <record>
        event ${record}
      </record>
      renew_record
      auto_typecast
    </filter>
    <filter kubernetes.**>
      @type grep
      exclude1 severity (DEBUG|NOTICE|WARN)
    </filter>
  kubernetes.conf: |
    <source>
      @type tail
      @log_level debug
      path /var/log/containers/*.log
      pos_file /var/log/kubernetes.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      tag kubernetes.*
      format json
    </source>
    <filter kubernetes.**>
      @type kubernetes_metadata
      verify_ssl false
    </filter>
    <filter kubernetes.**>
      @type record_transformer
      enable_ruby
      <record>
        event ${record}
      </record>
      renew_record
      auto_typecast
    </filter>
    # The `all_items` parameter isn't documented, but it is necessary in order for
    # us to be able to send k8s events to splunk in a useful manner
    <match kubernetes.**>
      @type copy
      <store>
        @type splunk-http-eventcollector
        all_items true
        server localhost:8088
        protocol https
        verify false
      </store>
      <store>
        @type elasticsearch
        host localhost
        port 9200
        scheme http
        ssl_version TLSv1_2
        ssl_verify false
      </store>
    </match>
Answer 0 (score: 0)
How about the following? (untested)
<source>
  @type tail
  @log_level debug
  path /var/log/containers/*.log
  pos_file /var/log/kubernetes.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag kubernetes.*
  format json
  @label @INPUT
</source>
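# The @label @INPUT on the source routes its events straight into the
# <label @INPUT> section below, bypassing any top-level <match>/<filter>.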
<label @INPUT>
  <filter kubernetes.**>
    @type kubernetes_metadata
    verify_ssl false
  </filter>
  <filter kubernetes.**>
    @type record_transformer
    enable_ruby
    <record>
      event ${record}
    </record>
    renew_record
    auto_typecast
  </filter>
  <match **>
    @type relabel
    @label @RETAG
  </match>
</label>
<label @RETAG>
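  # Re-tag each event by its severity so the @OUTPUT stage can route records
  # to Splunk or Elasticsearch with ordinary match patterns.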
  <match **>
    @type rewrite_tag_filter
    # After the record_transformer the original fields are nested under
    # "event", so the severity has to be addressed with record_accessor syntax.
    <rule>
      key $['event']['@severity']
      pattern /(INFO|ERROR)/
      tag splunk.${tag}
    </rule>
    <rule>
      key $['event']['@severity']
      pattern /(DEBUG|TRACE|WARN)/
      tag elasticsearch.${tag}
    </rule>
    @label @OUTPUT
  </match>
<label @OUTPUT>
  <match splunk.**>
    @type splunk-http-eventcollector
    # ... snip
  </match>
  <match elasticsearch.**>
    @type elasticsearch
    # ... snip
  </match>
</label>
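For completeness, the snipped output blocks could simply reuse the endpoints from the match section in the question's original config. A sketch of the filled-in @OUTPUT label, assuming the same localhost endpoints and leaving buffering at its defaults:

<label @OUTPUT>
  <match splunk.**>
    @type splunk-http-eventcollector
    all_items true
    server localhost:8088
    protocol https
    verify false
  </match>
  <match elasticsearch.**>
    @type elasticsearch
    host localhost
    port 9200
    scheme http
  </match>
</label>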