Below is my configuration on a CentOS 6 logstash server. I am using logstash 1.4.2 and elasticsearch 1.2.1. I forward logs from /var/log/messages and /var/log/secure, whose timestamps look like "Sep 1 22:15:34".
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "certs/logstash-forwarder.crt"
    ssl_key => "private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      locale => "en"  # possibly this didn't work in logstash 1.4.2
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
      add_field => { "debug" => "timestampMatched" }
      timezone => "UTC"
    }
    # I saw somewhere that in logstash 1.4.2 this is needed instead of locale => "en"
    ruby { code => "event['@timestamp'] = event['@timestamp'].getlocal" }
    # this probably won't work and gives a date parsing error
    mutate { replace => [ "syslog_timestamp", "%{syslog_timestamp} +0545" ] }
  }
}
output {
  elasticsearch { host => "logstash_server_ip" }
  stdout { codec => rubydebug }
}
Below is the logstash-forwarder config on all client servers:
{
  "network": {
    "servers": [ "logstash_server_ip:5000" ],
    "timeout": 15,
    "ssl ca": "certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}
Here is the problem. I am forwarding logs from 5 servers with different timezones, e.g. EDT, NDT, NST, NPT. The logstash server's timezone is NPT (Nepal Time) [UTC+5:45].
All servers show the following forwarder output:
2014/09/02 08:09:02.204882 Setting trusted CA from file: certs/logstash-forwarder.crt
2014/09/02 08:09:02.205372 Connecting to logstash_server_ip:5000 (logstash_server_ip)
2014/09/02 08:09:02.205600 Launching harvester on new file: /var/log/secure
2014/09/02 08:09:02.205615 Starting harvester at position 5426763: /var/log/messages
2014/09/02 08:09:02.205742 Current file offset: 5426763
2014/09/02 08:09:02.279715 Starting harvester: /var/log/secure
2014/09/02 08:09:02.279756 Current file offset: 12841221
2014/09/02 08:09:02.638448 Connected to logstash_server_ip
2014/09/02 08:09:09.998098 Registrar received 1024 events
2014/09/02 08:09:15.189079 Registrar received 1024 events
I expected everything to work, but only the one server in the NPT timezone forwards logs that I can actually see in Kibana. All the others print the output above, yet nothing from them shows up in Kibana. I suspect the problem lies in the date filter, since it cannot parse dates coming from servers in other timezones. Also, logstash itself shows no errors in its logs.
How can I fix this?
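For context, the ambiguity behind this symptom can be reproduced outside logstash: a bare syslog timestamp carries no zone, so the same string denotes different instants depending on which offset the parser assumes. A minimal Ruby sketch (the offsets +0545 for NPT and -0400 for EDT are hard-coded here purely for illustration):

```ruby
require 'time'

# A bare syslog timestamp has no timezone information.
stamp = "Sep  1 22:15:34"

# Parse the same string under two assumed offsets:
# NPT (UTC+5:45) and EDT (UTC-4:00).
npt = Time.parse("#{stamp} +0545")
edt = Time.parse("#{stamp} -0400")

# Same wall-clock string, but the instants are 9 h 45 min apart.
puts edt.to_i - npt.to_i  # 35100 seconds
```

So events parsed with the wrong assumed zone land up to ~10 hours away from where Kibana's time window is looking, which matches "forwarder connects fine, but nothing visible".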
Answer 0 (score: 0)
In the logstash-forwarder config on each client, change
"fields": { "type": "syslog" }
to
"fields": { "type": "syslog", "syslog_timezone": "Asia/Kathmandu" }
(using that server's own timezone), and change filter.conf to:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    # Set timezone appropriately
    if [syslog_timezone] in [ "Asia/Kathmandu" ] {
      date {
        match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
        remove_field => [ "syslog_timezone" ]
        timezone => "Asia/Kathmandu"
      }
    } else if [syslog_timezone] in [ "America/Chicago", "US/Central" ] {
      date {
        match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
        remove_field => [ "syslog_timezone" ]
        timezone => "America/Chicago"
      }
    } else if [syslog_timezone] =~ /.+/ {
      date {
        match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
        add_tag => [ "unknown_timezone" ]
        timezone => "Etc/UTC"
      }
    } else {
      date {
        match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
        timezone => "Etc/UTC"
      }
    }
  }
}
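The branching above can be sketched in plain Ruby to show the intended behavior. This is a hypothetical helper, not logstash's actual date filter: the fixed offsets below ignore DST, which the real tz database handles for you.

```ruby
require 'time'

# Hypothetical zone table for illustration only; logstash resolves
# names like "America/Chicago" via the tz database (DST included),
# which these fixed offsets do not.
KNOWN_ZONES = {
  "Asia/Kathmandu"  => "+0545",
  "America/Chicago" => "-0500",
  "US/Central"      => "-0500",
}

# Mirror the filter: a known zone parses with its offset; an unknown
# non-empty zone gets tagged and falls back to UTC; no zone means UTC.
def parse_syslog(stamp, zone)
  if KNOWN_ZONES.key?(zone)
    [Time.parse("#{stamp} #{KNOWN_ZONES[zone]}").utc, []]
  elsif zone.to_s.empty?
    [Time.parse("#{stamp} +0000").utc, []]
  else
    [Time.parse("#{stamp} +0000").utc, ["unknown_timezone"]]
  end
end

t, tags = parse_syslog("Sep  1 22:15:34", "Asia/Kathmandu")
puts t.strftime("%H:%M:%S UTC")  # 16:30:34 UTC
```

The key design point is that each event carries its origin timezone as a field, so one central filter can normalize every server's timestamps to a single reference (UTC) before indexing.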