I have a Kubernetes pod with two containers: the main app, which writes its logs to a file on a shared volume, and a Fluentd sidecar that tails that log file and ships the records to Elasticsearch.
Here is the Fluentd configuration:
<source>
  type tail
  format none
  path /test/log/system.log
  pos_file /test/log/system.log.pos
  tag anm
</source>
<match **>
  @id elasticsearch
  @type elasticsearch
  @log_level debug
  time_key @timestamp
  include_timestamp true
  include_tag_key true
  host elasticsearch-logging.kube-system.svc.cluster.local
  port 9200
  logstash_format true
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.system.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 2
    flush_interval 5s
    retry_forever
    retry_max_interval 30
    chunk_limit_size 2M
    queue_limit_length 8
    overflow_action block
  </buffer>
</match>
Everything appears to be up: the Elasticsearch host and port are correct, since the API responds at that URL. However, the only records I see in Kibana are the ones appearing every 5 seconds about Fluentd creating a new chunk:
2018-12-03 12:15:50 +0000 [debug]: #0 [elasticsearch] Created new chunk chunk_id="57c1d1c105bcc60d2e2e671dfa5bef04" metadata=#<struct Fluent::Plugin::Buffer::Metadata timekey=nil, tag="anm", variables=nil>
but none of the actual logs that the app writes to the system.log file. Kibana is configured with the "logstash-*" index pattern, which matches the one and only existing index.
Version of Fluentd image: k8s.gcr.io/fluentd-elasticsearch:v2.0.4
Version of Elasticsearch: k8s.gcr.io/elasticsearch:v6.3.0
Where can I look to find out what's wrong? It seems Fluentd never manages to get the logs into Elasticsearch, but what could the reason be?
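A rough sanity check, not part of the setup above: temporarily route the tailed records to the sidecar's stdout (placing this before the <match **> block so it takes precedence) and watch the container's logs to confirm the tail source is picking up lines at all:

<match anm>
  # temporary debug output: prints every tailed record to the Fluentd container's stdout
  @type stdout
</match>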
Answer 0 (score: 0)
The answer turned out to be simple, and it may help someone in the future.
I found that the problem was this line in the source configuration:
<source>
  ...
  format none
  ...
</source>
This means that none of the usual tags (such as the pod or container name) get attached to the records when they are stored in Elasticsearch, so I had to search for them in Kibana in a completely different way. For example, searching by my own tag (a query on the tag field, such as tag:anm, since include_tag_key is enabled) turned them all up just fine. That custom tag had originally been added just in case, but it turned out to be very useful:
<source>
  ...
  tag anm
  ...
</source>
So the takeaway is probably this: be careful with "format none". If the source data really is unstructured, add your own tag, and consider using fluentd's record_transformer to attach extra tags/information (such as the hostname), which is what I ended up doing as well. That makes it much easier to locate the records in Kibana.
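For reference, a minimal sketch of that record_transformer idea, assuming the "anm" tag from the source block above; the added field names ("hostname", "fluentd_tag") are only illustrative:

<filter anm>
  @type record_transformer
  <record>
    # "#{Socket.gethostname}" is evaluated once when the configuration is loaded
    hostname "#{Socket.gethostname}"
    # ${tag} expands to the event's tag, "anm" in this setup
    fluentd_tag ${tag}
  </record>
</filter>

The filter has to appear before the <match **> block so the records are enriched before they reach the Elasticsearch output; with fields like these on every document, finding the records in Kibana no longer depends on the tag alone.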