Logstash: Value too large to output

Asked: 2016-11-15 19:54:54

Tags: elasticsearch jboss logstash elastic-stack logstash-grok

I recently built an ELK stack using version 5.0.0-1.

While using a multiline filter to massage JBoss logs, I see the following errors:

[2016-11-14T19:48:48,802][ERROR][logstash.filters.grok    ] Error while attempting to check/cancel excessively long grok patterns {:message=>"Mutex relocking by same thread", :class=>"ThreadError", :backtrace=>["org/jruby/ext/thread/Mutex.java:90:in `lock'", "org/jruby/ext/thread/Mutex.java:147:in `synchronize'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-grok-3.2.3/lib/logstash/filters/grok/timeout_enforcer.rb:38:in `stop_thread_groking'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-grok-3.2.3/lib/logstash/filters/grok/timeout_enforcer.rb:53:in `cancel_timed_out!'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-grok-3.2.3/lib/logstash/filters/grok/timeout_enforcer.rb:45:in `cancel_timed_out!'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-grok-3.2.3/lib/logstash/filters/grok/timeout_enforcer.rb:44:in `cancel_timed_out!'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-grok-3.2.3/lib/logstash/filters/grok/timeout_enforcer.rb:63:in `start!'"]}
[2016-11-14T19:48:48,802][WARN ][logstash.filters.grok    ] Timeout executing grok '%{DATA:prefixofMessage}<tXML>%{DATA:orderXML}</tXML>' against field 'message' with value 'Value too large to output (27191 bytes)! First 255 chars are: 2016-10-30 23:28:02,193 INFO  [nucleusNamespace.com.NAMESPACEREDACTED.NAMESPACEREDACTED.NAMESPACEREDACTED] (ajp-IPADDRESSREDACTED-PORTREDACTED-325) DEBUG  NAMEREDACTED | order xml ----------- <?xml version="1.0" encoding="UTF-8" standalone="yes"?>

The same filter worked fine under 2.4, but running it on 5.0.0-1 I now see this.

Has anyone else seen this with this version of the ELK stack?
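For reference, the question does not include the pipeline configuration, but a minimal sketch consistent with the grok pattern quoted in the warning might look like the following. The multiline settings (`pattern`, `negate`, `what`) and field names here are assumptions, not the asker's actual config; only the grok `match` pattern comes from the log message above.

```
# Hypothetical reconstruction -- the original config is not shown in the question.
filter {
  multiline {
    # Assumed: join continuation lines onto the preceding timestamped line
    pattern => "^%{TIMESTAMP_ISO8601}"
    negate  => true
    what    => "previous"
  }
  grok {
    # Pattern taken verbatim from the timeout warning in the log output
    match => { "message" => "%{DATA:prefixofMessage}<tXML>%{DATA:orderXML}</tXML>" }
    # logstash-filter-grok 3.x enforces a per-event match timeout; this is the
    # mechanism that produced the "Timeout executing grok" warning (default 30000 ms)
    timeout_millis => 30000
  }
}
```

Note that leading `%{DATA}` against a 27 KB event is expensive for the regex engine, which is why the timeout enforcer fires on large multiline events like the embedded order XML here.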

1 Answer:

Answer 0 (score: 0)

This was fixed in https://github.com/logstash-plugins/logstash-filter-grok/pull/98. You can either upgrade the plugin right away or wait for Logstash 5.0.1.
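If you choose to upgrade the plugin immediately rather than wait for 5.0.1, Logstash 5.x ships a plugin manager that can pull the patched release; a sketch of the upgrade (paths assume the default package install under `/usr/share/logstash`):

```
cd /usr/share/logstash
bin/logstash-plugin update logstash-filter-grok
# Restart Logstash afterwards so the running pipeline picks up the new plugin version
```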