I have now tried several log collection services, such as logspout/papertrail and fluentd/elasticsearch, but the results are not always displayed in the correct order, which can make debugging difficult. One example is a Node.js application where a console.log command produces multiple lines, or an error with its stack trace. All of these lines show the same timestamp, and I guess the log collection service has no way of knowing in which order to display them. Is there a way to add millisecond precision? Or some other way to make sure they are displayed in the same order as with my docker logs command?
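For illustration, here is a minimal sketch (hypothetical, not my actual app) of the kind of output that triggers this: a single console.log call, or a stack trace, emits several lines at once, so a collector that only keeps second-level precision sees identical timestamps for all of them:

// One console.log call producing several lines in a single write.
console.log("step 1\nstep 2\nstep 3");

// A stack trace is multi-line for the same reason.
try {
  throw new Error("something broke");
} catch (err) {
  console.error((err as Error).stack);
}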
Update: I haven't looked into it yet, but I have seen some mention that fluentd or elasticsearch support millisecond (or better) precision by default.
Answer 0: (score: 1)
As far as I understand it, you have two options:
Answer 1: (score: 0)
While I would still love to see a proper solution, I found a workaround for fluentd with the help of this answer.
Here is my modified td-agent.conf, as used in the fluentd-es-image. It adds a time_nano field that can be sorted on:
<source>
  type tail
  format json
  time_key time
  path /varlog/containers/*.log
  pos_file /varlog/es-containers.log.pos
  # %L keeps millisecond precision when parsing the Docker timestamps.
  time_format %Y-%m-%dT%H:%M:%S.%L%Z
  tag cleanup.reform.*
  read_from_head true
</source>

<match cleanup.**>
  type record_reformer
  # Needed so the Ruby expression in time_nano below is evaluated.
  enable_ruby true
  # Current time as nanoseconds since the epoch, stored as a string.
  time_nano ${t = Time.now; ((t.to_i * 1000000000) + t.nsec).to_s}
  tag ${tag_suffix[1]}
</match>

<match reform.**>
  type record_reformer
  enable_ruby true
  tag kubernetes.${tag_suffix[3].split('-')[0..-2].join('-')}
</match>

<match kubernetes.**>
  type elasticsearch
  log_level info
  include_tag_key true
  host elasticsearch-logging.default
  port 9200
  logstash_format true
  flush_interval 5s
  # Never wait longer than 5 minutes between retries.
  max_retry_wait 300
  # Disable the limit on the number of retries (retry forever).
  disable_retry_limit
</match>

<source>
  type tail
  format none
  path /varlog/kubelet.log
  pos_file /varlog/es-kubelet.log.pos
  tag kubelet
</source>

<match kubelet>
  type elasticsearch
  log_level info
  include_tag_key true
  host elasticsearch-logging.default
  port 9200
  logstash_format true
  flush_interval 5s
  # Never wait longer than 5 minutes between retries.
  max_retry_wait 300
  # Disable the limit on the number of retries (retry forever).
  disable_retry_limit
</match>
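The time_nano values are equal-length digit strings, so newer entries compare higher and results can be ordered by that field. As a sketch of how you could query it (the Node client code and the logstash-* index pattern are just for illustration, not part of the original setup; depending on your Elasticsearch version and index template you may need to sort on a not_analyzed variant such as time_nano.raw):

// Fetch the 20 most recent entries, ordered by the time_nano field
// added by the record_reformer stage above. Assumes Node 18+ (global
// fetch) and the elasticsearch-logging host from the config.
async function latestLogs(): Promise<void> {
  const res = await fetch("http://elasticsearch-logging.default:9200/logstash-*/_search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      size: 20,
      sort: [{ time_nano: { order: "desc" } }],
    }),
  });
  const body = await res.json();
  for (const hit of body.hits.hits) {
    // Docker's json-file driver puts the raw log line in the "log" field.
    console.log(hit._source.time_nano, hit._source.log);
  }
}

latestLogs().catch(console.error);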