I am trying to parse Tomcat logs and pass the output to Elasticsearch. More or less it works well. However, when I look at the data indexed in Elasticsearch, it contains many documents whose tags field is set to _grokparsefailure. This results in a lot of duplicated data. To avoid this, I tried to drop the event whenever tags contains _grokparsefailure; that condition is written in the logstash.conf file, below the grok filter. Still, the output sent to Elasticsearch contains indexed documents tagged with _grokparsefailure.
If grok fails, I don't want that event to go to Elasticsearch at all, because it causes duplicate data in Elasticsearch.
The logstash.conf file is:
input {
  file {
    path => "/opt/elasticSearch/logstash-1.4.2/input.log"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => previous
    }
    start_position => "end"
  }
}
filter {
  grok {
    match => [
      "message", "^\[%{GREEDYDATA}\] %{GREEDYDATA} Searching hotels for country %{GREEDYDATA:country}, city %{GREEDYDATA:city}, checkin %{GREEDYDATA:checkin}, checkout %{GREEDYDATA:checkout}, roomstay %{GREEDYDATA:roomstay}, No. of hotels returned is %{NUMBER:hotelcount} ."
    ]
  }
  if "_grokparsefailure" in [tags] {
    drop { }
  }
}
output {
  file {
    path => "/opt/elasticSearch/logstash-1.4.2/output.log"
  }
  elasticsearch {
    cluster => "elasticsearchdev"
  }
}
The Elasticsearch response to http://172.16.37.97:9200/logstash-2015.12.23/_search?pretty=true gives the output below, which contains three documents; the first of them has _grokparsefailure in the tags field of _source.
I don't want that document in the output, so it probably needs to be blocked in Logstash so that it never reaches Elasticsearch.
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 3,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "logstash-2015.12.23",
        "_type" : "logs",
        "_id" : "J6CoEhKaSE68llz5nEbQSQ",
        "_score" : 1.0,
        "_source":{"message":"[2015-12-23 12:08:40,124] ERROR http-80-5_@{AF3AF784EC08D112D5D6FC92C78B5161,127.0.0.1,1450852688060} com.mmt.hotels.web.controllers.search.HotelsSearchController - Searching hotels for country IN, city DEL, checkin 28-03-2016, checkout 29-03-2016, roomstay 1e0e, No. of hotels returned is 6677 .","@version":"1","@timestamp":"2015-12-23T14:17:03.436Z","host":"ggn-37-97","path":"/opt/elasticSearch/logstash-1.4.2/input.log","tags":["_grokparsefailure"]}
      },
      {
        "_index" : "logstash-2015.12.23",
        "_type" : "logs",
        "_id" : "2XMc6nmnQJ-Bi8vxigyG8Q",
        "_score" : 1.0,
        "_source":{"@timestamp":"2015-12-23T14:17:02.894Z","message":"[2015-12-23 12:08:40,124] ERROR http-80-5_@{AF3AF784EC08D112D5D6FC92C78B5161,127.0.0.1,1450852688060} com.mmt.hotels.web.controllers.search.HotelsSearchController - Searching hotels for country IN, city DEL, checkin 28-03-2016, checkout 29-03-2016, roomstay 1e0e, No. of hotels returned is 6677 .","@version":"1","host":"ggn-37-97","path":"/opt/elasticSearch/logstash-1.4.2/input.log","country":"IN","city":"DEL","checkin":"28-03-2016","checkout":"29-03-2016","roomstay":"1e0e","hotelcount":"6677"}
      },
      {
        "_index" : "logstash-2015.12.23",
        "_type" : "logs",
        "_id" : "fKLqw1LJR1q9YDG2yudRDw",
        "_score" : 1.0,
        "_source":{"@timestamp":"2015-12-23T14:16:12.684Z","message":"[2015-12-23 12:08:40,124] ERROR http-80-5_@{AF3AF784EC08D112D5D6FC92C78B5161,127.0.0.1,1450852688060} com.mmt.hotels.web.controllers.search.HotelsSearchController - Searching hotels for country IN, city DEL, checkin 28-03-2016, checkout 29-03-2016, roomstay 1e0e, No. of hotels returned is 6677 .","@version":"1","host":"ggn-37-97","path":"/opt/elasticSearch/logstash-1.4.2/input.log","country":"IN","city":"DEL","checkin":"28-03-2016","checkout":"29-03-2016","roomstay":"1e0e","hotelcount":"6677"}
      } ]
  }
}
Answer 0 (score: 6)
You can try testing for _grokparsefailure in the output section instead, like this:
output {
  if "_grokparsefailure" not in [tags] {
    file {
      path => "/opt/elasticSearch/logstash-1.4.2/output.log"
    }
    elasticsearch {
      cluster => "elasticsearchdev"
    }
  }
}
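If you would rather keep the unparsed lines around for debugging than drop them entirely, a variant of the same idea is to route the failures to a separate file output instead of Elasticsearch. A sketch (the failure-log path below is just an illustrative placeholder, not from your setup):

output {
  if "_grokparsefailure" in [tags] {
    # unparsed events go to a local file for inspection instead of being indexed
    file {
      path => "/opt/elasticSearch/logstash-1.4.2/grok-failures.log"
    }
  } else {
    file {
      path => "/opt/elasticSearch/logstash-1.4.2/output.log"
    }
    elasticsearch {
      cluster => "elasticsearchdev"
    }
  }
}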
Answer 1 (score: 1)
Sometimes you have multiple grok filters, and some of them may fail for certain events while the rest succeed; in that case, dropping events based on _grokparsefailure will not solve the problem.
Example:
input {
  # some input
}
filter {
  grok { }   # grok 1: extracts an ip into my_ip1
  grok { }   # grok 2: extracts an ip into my_ip2
  grok { }   # grok 3: extracts an ip into my_ip3
}
output {
  if "_grokparsefailure" not in [tags] {   # this will not write to the output if any single grok fails
    # some output
  }
}
My solution here is to filter based on some of the extracted fields instead. Are there better approaches? For example:
if "10." in ["ip1"] or "10." in ["ip2"] or "10." in ["ip3"]
{
drop{}
}
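Another option, if you need to know which of several grok filters actually failed, is to give each grok its own failure tag via its tag_on_failure option and condition on those tags. A minimal sketch (the patterns and tag names below are placeholders, not taken from the question):

filter {
  # each grok replaces the default _grokparsefailure tag with its own tag
  grok { match => ["message", "%{IP:my_ip1}"] tag_on_failure => ["_grok1_failed"] }
  grok { match => ["message", "%{IP:my_ip2}"] tag_on_failure => ["_grok2_failed"] }
  grok { match => ["message", "%{IP:my_ip3}"] tag_on_failure => ["_grok3_failed"] }
}
output {
  # index the event as long as at least one of the grok filters succeeded
  if "_grok1_failed" not in [tags] or "_grok2_failed" not in [tags] or "_grok3_failed" not in [tags] {
    elasticsearch {
      cluster => "elasticsearchdev"
    }
  }
}

This way a single failing pattern no longer forces an all-or-nothing decision for the whole event.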