I'm trying to build an index in Elasticsearch with the help of Filebeat and Logstash. Here is my filebeat.yml:
filebeat.inputs:
- type: docker
  combine_partial: true
  containers:
    path: "/usr/share/dockerlogs/data"
    stream: "stdout"
    ids:
      - "*"
  exclude_files: ['\.gz$']
  ignore_older: 10m

processors:
  # Decode the log field (sub-JSON document) if it is JSON-encoded, then map its fields to Elasticsearch fields
  - decode_json_fields:
      fields: ["log", "message"]
      target: ""
      # Overwrite existing target Elasticsearch fields while decoding JSON fields
      overwrite_keys: true
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

# Set up Filebeat to send its output to Logstash
output.logstash:
  hosts: ["xxx.xx.xx.xx:5044"]

# Write Filebeat's own logs only to file, to avoid Filebeat picking them up again from the Docker log files
logging.level: info
logging.to_files: false
logging.to_syslog: false
logging.metrics.enabled: false
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
ssl.verification_mode: none
And here is logstash.conf:
input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}

output {
  stdout {
    codec => dots
  }
  elasticsearch {
    hosts => "http://xxx.xx.xx.x:9200"
    index => "%{[docker][container][labels][com][docker][swarm][service][name]}-%{+xxxx.ww}"
  }
}
I'm trying to build the index from the Docker service name, so that it is more readable and clearer than the usual pattern we often see, like "filebeat-xxxxxx.some-date". I have tried several things:

- index => "%{[docker][container][labels][com][docker][swarm][service][name]}-%{+xxxx.ww}"
- index => "%{[docker][container][labels][com][docker][swarm][service][name]}-%{+YYYY.MM}"
- index => "%{[docker][swarm][service][name]}-%{+xxxx.ww}"

But none of them worked. What am I doing wrong? Maybe I'm doing something wrong, or something is missing, in the filebeat.yml file. Any help is appreciated.
Answer 0 (score: 1)
It looks like you're not sure which Docker metadata fields are actually being added. It may be easier to first index successfully with the default index name (e.g. "filebeat-xxxxxx.some-date" or whatever), and then look at the log events to see the format of the Docker metadata fields.
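One quick way to inspect those events (a sketch on my part, not part of this answer's original setup) is to temporarily switch the stdout codec in logstash.conf from dots to rubydebug, which prints each event in full:

```
output {
  # Debugging only: print every event in full so the structure of the
  # [docker][...] metadata fields can be read off the console.
  stdout {
    codec => rubydebug
  }
}
```

Once the exact field paths are visible, switch the codec back to dots to keep the console quiet.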
My setup is different from yours, but for reference, I'm on AWS ECS, so my docker fields have this format:
"docker": {
  "container": {
    "name": "",
    "labels": {
      "com": {
        "amazonaws": {
          "ecs": {
            "cluster": "",
            "container-name": "",
            "task-definition-family": "",
            "task-arn": "",
            "task-definition-version": ""
          }
        }
      }
    },
    "image": "",
    "id": ""
  }
}
Once I could see the available format and fields, I was able to use the above to add a custom "application_name" field. In my case this field is generated in my input plugin, redis, but every input plugin should have the add_field option (https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html#plugins-inputs-beats-add_field):
input {
  redis {
    host => "***"
    data_type => "list"
    key => "***"
    codec => json
    add_field => {
      "application_name" => "%{[docker][container][labels][com][amazonaws][ecs][task-definition-family]}"
    }
  }
}
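As an aside, such a field can also drive per-application filtering; a minimal sketch (the application name "my-api" and the choice of the json filter are hypothetical) might look like:

```
filter {
  # Only parse events from one hypothetical application as JSON;
  # other applications could get their own grok/kv/etc. branches.
  if [application_name] == "my-api" {
    json {
      source => "log"
    }
  }
}
```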
With this new custom field in place, I was able to run specific filters (grok, json, kv, etc.) per "application_name", since each application has a different log format. What matters for you, though, is that you can use the field in the index name of the Elasticsearch output:
output {
  elasticsearch {
    user => ***
    password => ***
    hosts => [ "***" ]
    index => "logstash-%{application_name}-%{+YYYY.MM.dd}"
  }
}
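One pitfall worth noting (my addition, not covered above): if the referenced field is missing from an event, Logstash leaves the %{...} placeholder unexpanded, so the index name literally contains the "%{[docker]...}" text. A hedged sketch of a guard against that, using a mutate filter with a fallback value ("unknown-service" is a placeholder of my choosing):

```
filter {
  # If the docker label field never arrived, fall back to a fixed name
  # so the index name stays usable instead of containing a raw %{...}.
  if ![docker][container][labels][com][docker][swarm][service][name] {
    mutate {
      add_field => { "application_name" => "unknown-service" }
    }
  } else {
    mutate {
      add_field => { "application_name" => "%{[docker][container][labels][com][docker][swarm][service][name]}" }
    }
  }
}
```

The output's index option can then reference %{application_name} unconditionally.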