I've been using Logstash for a month, and since about a week ago I can't get it to start. Logstash runs as the 6.2.4 Docker image on a Linux machine. It was working fine before, so I don't know what happened. The only thing my boss did was upgrade from version 6.2.3 to 6.2.4, but the error didn't appear at that moment, only a few days later, so I guess that isn't the problem.
I have a logstash.conf file with my configuration, and the internal log points at a specific line of "the config file", but the funny thing is that this line doesn't exist, because the file has fewer lines than that. I've read that Logstash merges all the .conf files, but I can't find the merged file anywhere to check that line. I'm quite frustrated.
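That merging behavior can be reproduced in isolation: when `path.config` points at a directory (or a glob), Logstash simply concatenates every file it finds there, in lexical order, into one logical configuration, so a "line 205" in an error message is counted over that concatenated stream, not over any single file. A minimal sketch of the idea, with made-up file names standing in for a real pipeline directory:

```shell
# Simulate how Logstash concatenates all config files in a directory.
# The directory and file names here are made up for illustration.
dir=$(mktemp -d)
printf 'input {\n  stdin {}\n}\n' > "$dir/10-input.conf"
printf 'output {\n  stdout {}\n}\n' > "$dir/20-output.conf"

# Logstash reads the files in lexical order and parses them as one
# stream; an error "at line N" refers to this combined numbering:
cat "$dir"/*.conf | nl -ba
```

This is why an error can point at a line number larger than any individual file: the offending line lives in whichever file happens to sort last in the directory.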
Here is my logstash.conf file:
input {
  beats {
    port => "5044"
    client_inactivity_timeout => "120"
  }
}
filter {
  ruby {
    code => "event.set('day',event.get('source').split('/')[5].split('-').last)"
  }
  ruby {
    code => "event.set('app',event.get('source').split('/')[5].split('-').first)"
  }
  ruby {
    code => "event.set('categoria',event.get('source').split('/').last.split('.').first.split('-')[1])"
  }
  ruby {
    code => "event.set('nodo',event.get('source').split('/')[4])"
  }
  ruby {
    code => "event.set('conFecha',true)"
  }
  if [day] == "A" {
    ruby {
      code => "event.set('day',Time.now.getlocal('-03:00').strftime('%Y%m%d'))"
    }
    ruby {
      code => "event.set('conFecha',false)"
    }
  }
  grok {
    patterns_dir => ["/usr/share/logstash/pipeline/pattern/patterns"]
    match => { "message" => "%{SV_TIME:time}, %{SV_TIMESTAMP:numero}, cliente\[%{DATA:client}\], %{WORD:level} , performance - (.)* .*\[%{NUMBER:milliseconds:float}\].*\[com.vtr.servicesvtr.ws.client.factory.([a-z])*.*%{WORD:crm}.%{WORD:grupo}.%{WORD:method}.*.(ejecutar)\]" }
    add_field => {
      "tipo" => "SOA"
      "fechahora" => "%{day} %{time}"
      "performance" => "performance"
    }
  }
  grok {
    patterns_dir => ["/usr/share/logstash/pipeline/pattern/patterns"]
    match => { "message" => "%{SV_TIME:time}, %{SV_TIMESTAMP:numero}, cliente\[%{DATA:client}\], %{WORD:level} , performance - (.)* .*\[%{NUMBER:milliseconds:float}\].*\[/%{SV_PACKAGE:microservicio}/%{DATA:url}\]" }
    add_field => {
      "tipo" => "MS"
      "fechahora" => "%{day} %{time}"
      "performance" => "performance"
    }
  }
  if [performance] != "performance" {
    grok {
      patterns_dir => ["/usr/share/logstash/pipeline/pattern/patterns"]
      match => { "message" => "%{SV_TIME:time}, %{SV_TIMESTAMP:numero}, cliente\[%{DATA:client}\], %{WORD:level}.*\, %{GREEDYDATA:resultado}" }
      add_field => {
        "tipo" => "AUDIT"
        "fechahora" => "%{day} %{time}"
        "auditoria" => "auditoria"
      }
    }
  }
  if [performance] != "performance" and [auditoria] != "auditoria" {
    grok {
      patterns_dir => ["/usr/share/logstash/pipeline/pattern/patterns"]
      match => { "message" => "%{SV_DATE_TIME:fecha}.* seguridad \- %{DATA:usuario}\|%{DATA:rut}\|%{DATA:resultado} \-> %{GREEDYDATA:metodo}" }
      add_field => {
        "tipo" => "SEGURIDAD"
        "fechahora" => "%{fecha}"
        "su" => "su"
      }
    }
    mutate {
      lowercase => [ "usuario" ]
    }
    date {
      match => ["fechahora", "yyyy-MM-dd HH:mm:ss,SSS"]
    }
  }
  if [su] != "su" {
    date {
      match => ["fechahora", "yyyyMMdd HH:mm:ss.SSS"]
    }
  }
  date {
    match => ["timestamp" , "yyyyMMdd'T'HH:mm:ss.SSS"]
    target => "@timestamp"
  }
}
output {
  if [performance] == "performance" and [url] != "manage/health" {
    if [conFecha] {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "%{[app_id]}-performances-%{+YYYY.MM.dd}"
      }
    } else {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "temp-%{[app_id]}-performances-%{+YYYY.MM.dd}"
      }
    }
  } else if [auditoria] == "auditoria" {
    if [conFecha] {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "%{[app_id]}-auditoria-%{+YYYY.MM.dd}"
      }
    } else {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "temp-%{[app_id]}-auditoria-%{+YYYY.MM.dd}"
      }
    }
  }
  if [performance] == "performance" and [milliseconds] >= 1000 and [url] != "manage/health" {
    if [conFecha] {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "%{[app_id]}-performance-scache-%{+YYYY.MM.dd}"
      }
    } else {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "temp-%{[app_id]}-performance-scache-%{+YYYY.MM.dd}"
      }
    }
  }
}
Here is the log as well:
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-04-27T15:59:06,770][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-04-27T15:59:06,879][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-04-27T15:59:13,551][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-04-27T15:59:17,073][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-27T15:59:27,711][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, input, filter, output at line 205, column 1 (byte 6943) after ", :backtrace=>[
  "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:42:in `compile_imperative'",
  "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:50:in `compile_graph'",
  "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:12:in `block in compile_sources'",
  "org/jruby/RubyArray.java:2486:in `map'",
  "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `compile_sources'",
  "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:51:in `initialize'",
  "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:169:in `initialize'",
  "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:315:in `block in converge_state'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:312:in `block in converge_state'",
  "org/jruby/RubyArray.java:1734:in `each'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:299:in `converge_state'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'",
  "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:348:in `block in execute'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}
I really hope someone can help. I'm getting desperate. Bye.
Answer 0 (score: 0)
I solved the problem by deleting all the files in the config directory, since Logstash was merging all of them. In this case the culprit was a uup file. Logstash recreated the files again afterwards anyway, thanks to the Docker image.
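For anyone hitting the same thing, it may be safer to first find out which stray file is being merged in, rather than deleting everything. A small sketch of the idea, using a temp directory to stand in for the real pipeline directory (which is /usr/share/logstash/pipeline in the official Docker image):

```shell
# Simulate spotting a stray file in a Logstash pipeline directory.
# A temp dir stands in for /usr/share/logstash/pipeline here; the
# .uup file name mimics the leftover described in the answer.
pipeline=$(mktemp -d)
touch "$pipeline/logstash.conf" "$pipeline/logstash.conf.uup"

# Logstash 6.x reads EVERY file in the directory, not just *.conf,
# so list anything that is not a plain .conf file -- those are the
# candidates to remove:
find "$pipeline" -type f ! -name '*.conf'
```

On the real container you can also run `bin/logstash --config.test_and_exit -f /usr/share/logstash/pipeline/` to validate the combined configuration without starting the pipeline, which reports the same "line 205" style error and confirms which concatenated content it is choking on.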