It seems that elasticsearch's memory grows without bound, and the environment variables ES_MIN_MEM and ES_MAX_MEM do not seem to work properly. If I find out anything, I will come back and update this question.
I found that I had made a mistake myself. If there are not too many logs, logstash pops items off the list and they are removed. But if logstash or elasticsearch is blocked, the length of the redis key grows without bound. Thanks for your help; I think this question can probably be closed.
Here is the original question:
When I use a static key on the shipper node (that is, not using %{type} and the like), the length of that key keeps growing from the moment we start the monitoring system. In redis, the usual way to drop out-of-date logs is to set a TTL on different keys. So can we remove the earlier logs under the same key while keeping the later ones? Or is there some other way to use redis as a cache and avoid running out of memory? Thanks! Here are my configuration files (a sketch of a per-type key variant follows them):
File: shipper.conf
input {
  file {
    type => "ceph-daemons"
    path => "/var/log/ceph/ceph-*.log"
    start_position => "end"
  }
  file {
    type => "ceph-activity"
    path => "/var/log/ceph/ceph.log"
    start_position => "end"
  }
  file {
    type => "nova"
    path => "/var/log/nova/*.log"
    start_position => "end"
  }
}
output {
  stdout { }
  redis {
    host => "10.1.0.154"
    data_type => "list"
    key => "logstash"
  }
}
File: central.conf
input {
  redis {
    host => "10.1.0.154"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { }
  elasticsearch {
    cluster => "logstash"
  }
}
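For reference, here is a rough sketch of the per-type key variant mentioned above. It is only a sketch: the redis output accepts sprintf-style key names such as logstash-%{type}, but since BLPOP works on exact key names, the central node would then need one redis input block per key (the key names below just mirror the types in my shipper.conf).

# shipper.conf output, one redis list per log type
output {
  redis {
    host => "10.1.0.154"
    data_type => "list"
    key => "logstash-%{type}"
  }
}

# central.conf input, one block per per-type list
input {
  redis {
    host => "10.1.0.154"
    data_type => "list"
    key => "logstash-ceph-daemons"
  }
  redis {
    host => "10.1.0.154"
    data_type => "list"
    key => "logstash-ceph-activity"
  }
  redis {
    host => "10.1.0.154"
    data_type => "list"
    key => "logstash-nova"
  }
}

Each per-type key could then be expired or trimmed in redis independently, although I am not sure whether expiring a list that the central node is still BLPOPing from is safe.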
I found the following in the logstash docs:
data_type
Value can be any of: "list", "channel", "pattern_channel"
There is no default value for this setting.
Specify either list or channel. If redis_type is list, then we will BLPOP the key. If redis_type is channel, then we will SUBSCRIBE to the key. If redis_type is pattern_channel, then we will PSUBSCRIBE to the key. TODO: change required to true
In the redis docs:
When BLPOP returns an element to the client, it also removes the element from the list. This means that the element only exists in the context of the client: if the client crashes while processing the returned element, it is lost forever.
Am I misreading these docs?
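For completeness, this is how I read the channel variant of the data_type doc above: with data_type => "channel" the shipper would PUBLISH and the central node would SUBSCRIBE, so nothing is stored in a redis list and nothing can pile up; but as far as I understand redis pub/sub, any event published while the central node is down or blocked would simply be lost. A rough sketch, reusing my host and key name:

# shipper.conf output (channel variant, nothing queued in redis)
output {
  redis {
    host => "10.1.0.154"
    data_type => "channel"
    key => "logstash"
  }
}

# central.conf input (channel variant)
input {
  redis {
    host => "10.1.0.154"
    data_type => "channel"
    key => "logstash"
  }
}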