I have been trying to solve the following problem without any success (Logstash 2.1, Elasticsearch 2.1, Kibana 4.3.1).
Here is my logstash.conf file:
input {
  file {
    path => ["/var/log/network.log"]
    start_position => "beginning"
    type => "syslog"
    tags => [ "netsyslog" ]
  }
} #end input block
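# Note: with start_position => "beginning" the file input still resumes from
# its sincedb record for files it has already seen; to force a full re-read,
# remove the sincedb file (typically $HOME/.sincedb_* for the user running
# Logstash 2.x) before restarting.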
########################################
filter {
  if [type] == "syslog" {
    # Split the syslog part and Cisco tag out of the message
    grok {
      match => ["message", "%{CISCO_TAGGED_SYSLOG} %{GREEDYDATA:cisco_message}"]
    }
    # Parse the syslog severity and facility
    #syslog_pri { }
    # Parse the date from the "timestamp" field to the "@timestamp" field
    # 2015-05-01T00:00:00+02:00 is ISO8601
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:timestamp}"]
    }
    date {
      # 2015-05-01T00:00:00+03:00
      match => ["timestamp",
        "yyyy-MM-dd'T'HH:mm:ssZ"
        # "yyyy MM dd HH:mm:ss",
      ]
      #timezone => "Asia/Kuwait"
    }
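    # Note: the timezone option only matters when the parsed string carries no
    # UTC offset; an ISO8601 stamp such as 2015-05-01T00:00:00+03:00 already
    # includes one, so the date filter honors the offset in the string.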
    # Clean up redundant fields if parsing was successful
    if "_grokparsefailure" not in [tags] {
      mutate {
        rename => ["cisco_message", "message"]
        remove_field => ["timestamp"]
      }
    }
    # Extract fields from each of the detailed message types
    grok {
      match => [
        "message", "%{CISCOFW106001}",
        "message", "%{CISCOFW106006_106007_106010}",
        "message", "%{CISCOFW106014}",
        "message", "%{CISCOFW106015}",
        "message", "%{CISCOFW106021}",
        "message", "%{CISCOFW106023}",
        "message", "%{CISCOFW106100}",
        "message", "%{CISCOFW110002}",
        "message", "%{CISCOFW302010}",
        "message", "%{CISCOFW302013_302014_302015_302016}",
        "message", "%{CISCOFW302020_302021}",
        "message", "%{CISCOFW305011}",
        "message", "%{CISCOFW313001_313004_313008}",
        "message", "%{CISCOFW313005}",
        "message", "%{CISCOFW402117}",
        "message", "%{CISCOFW402119}",
        "message", "%{CISCOFW419001}",
        "message", "%{CISCOFW419002}",
        "message", "%{CISCOFW500004}",
        "message", "%{CISCOFW602303_602304}",
        "message", "%{CISCOFW710001_710002_710003_710005_710006}",
        "message", "%{CISCOFW713172}",
        "message", "%{CISCOFW733100}"
      ]
    }
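    # grok tries the patterns above in order and stops at the first match
    # (break_on_match defaults to true)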
  }
  if [dst_ip] and [dst_ip] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
    geoip {
      source => "dst_ip"
      database => "/opt/logstash/vendor/GeoLiteCity.dat" ### Change me to location of GeoLiteCity.dat file
      target => "dst_geoip"
    }
  }
  if [src_ip] and [src_ip] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
    geoip {
      source => "src_ip"
      database => "/opt/logstash/vendor/GeoLiteCity.dat" ### Change me to location of GeoLiteCity.dat file
      target => "src_geoip"
    }
  }
  mutate {
    convert => [ "[src_geoip][coordinates]", "float" ]
  }
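  # Note: the geoip filter populates [src_geoip][location] by default; unless
  # [src_geoip][coordinates] is created elsewhere (e.g. with add_field), this
  # convert is a silent no-op.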
}
########################################
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["localhost"]
    template => "/opt/logstash/elasticsearch-template.json"
    template_overwrite => true
  }
} #end output block
When I run Logstash with this logstash.conf file, I can see it parsing the log. However, when I run curl 'localhost:9200/_cat/indices?v', all I get is the .kibana index, and loading the Kibana interface says "Unable to fetch mapping. Do you have indices matching the pattern?"
Any help would be greatly appreciated.
Thanks in advance.
Answer (score: 1)
My initial debugging suggestion is to check the Logstash and Elasticsearch logs. If you have a mapping conflict, Elasticsearch will log it, which will help you narrow things down.
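For example (a minimal sketch; the paths below assume a typical Logstash 2.x install under /opt/logstash and may differ on your system):

/opt/logstash/bin/logstash --configtest -f /path/to/logstash.conf  # validate the config syntax first
tail -f /var/log/logstash/logstash.log            # watch for pipeline or output errors
tail -f /var/log/elasticsearch/elasticsearch.log  # watch for mapping conflicts
curl 'localhost:9200/_cat/indices?v'              # confirm a logstash-* index appears

If Logstash starts cleanly but no logstash-* index ever shows up, the events are probably never leaving the file input; see the sincedb note in the config above.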