I am collecting logs with OSSEC and forwarding the JSON logs to logstash with logstash-forwarder. Here is my logstash configuration.
input {
  lumberjack {
    port => 10516
    type => "lumberjack"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    codec => json
  }
}
filter {
  # Parse the OSSEC alert JSON from the message field into top-level fields.
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
}
I want to extract the host indicated in parentheses in the "location" field and create a dedicated tag for it, because logstash only sees OSSEC as the source host, since OSSEC is the one forwarding the logs. Below is a sample output from logstash.
{
  "_index": "logstash-2015.09.23",
  "_type": "ossec-alerts",
  "_id": "AU_4Q1Hc5OjGfEBnRiWa",
  "_score": null,
  "_source": {
    "rule": {
      "level": 3,
      "comment": "Nginx error message.",
      "sidid": 31301
    },
    "srcip": "192.168.192.10",
    "location": "(logstash) 192.168.212.104->/var/log/nginx/error.log",
    "full_log": "2015/09/23 11:33:24 [error] 1057#0: *562 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.192.10, server: _, request: \"POST /elasticsearch/.kibana/__kibanaQueryValidator/_validate/query?explain=true&ignore_unavailable=true HTTP/1.1\", upstream: \"http://[::1]:5601/elasticsearch/.kibana/__kibanaQueryValidator/_validate/query?explain=true&ignore_unavailable=true\", host: \"192.168.212.104\", referrer: \"http://192.168.212.104/\"",
    "@version": "1",
    "@timestamp": "2015-09-23T03:33:25.588Z",
    "type": "ossec-alerts",
    "file": "/var/ossec/logs/alerts/alerts.json",
    "host": "ossec",
    "offset": "51048"
  },
  "fields": {
    "@timestamp": [
      1442979205588
    ]
  },
  "sort": [
    1442979205588
  ]
}
Answer 0 (score: 1)
Once you've applied the json {} filter, you're left with a set of fields. You can now apply more filters to those fields, including grok {} to make even more fields!
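For instance, a minimal sketch of chaining the two filters — the ossec_agent field name and the DATA pattern here are purely illustrative (the next answer gives a more precise pattern):

filter {
  json {
    source => "message"
  }
  # Illustrative only: capture whatever sits inside the parentheses
  # of the OSSEC location field into a new field.
  grok {
    match => { "location" => "\(%{DATA:ossec_agent}\)" }
  }
}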
Answer 1 (score: 1)
What you need is a grok filter. You can use the grok debugger to find the best pattern. The following pattern works for your location field:
\(%{HOST:host}\) %{IP:srcip}->%{PATH:path}
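Applied to the location value from the sample output above, the pattern should produce captures along these lines:

(logstash) 192.168.212.104->/var/log/nginx/error.log
  host  => "logstash"
  srcip => "192.168.212.104"
  path  => "/var/log/nginx/error.log"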
In the logstash filter section:
grok {
  match => { "location" => "\(%{HOST:host}\) %{IP:srcip}->%{PATH:path}" }
  overwrite => [ "host", "srcip" ]
}
overwrite is necessary because you already have the host and srcip fields.
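Since the question also asks for a dedicated tag: grok, like every logstash filter, accepts an add_tag option that is applied only on a successful match, and tag values may use %{field} references. A sketch, with a purely illustrative tag name:

grok {
  match => { "location" => "\(%{HOST:host}\) %{IP:srcip}->%{PATH:path}" }
  overwrite => [ "host", "srcip" ]
  # The tag text is just an example; any sprintf-style reference works.
  add_tag => [ "agent_%{host}" ]
}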