Logstash indexing error - Aggregate plugin: For task_id pattern '%{id}', there are more than one filter

Time: 2018-04-06 22:29:51

Tags: elasticsearch logstash

I am using Elasticsearch 5.5.0 and Logstash 5.5.0 on Linux - an AWS EC2 instance.

There is a logstash_etl.conf file residing in /etc/logstash/conf.d:

input {
     jdbc {
         jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
         jdbc_user => "root"
         jdbc_password => ""
         jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
         jdbc_driver_class => "com.mysql.jdbc.Driver"
         schedule => "*/5 * * * *"
         statement => "select * from customers"
         use_column_value => false
         clean_run => true
     }
  }

 filter {
    if ([api_key]) {
      aggregate {
        task_id => "%{id}"
        push_map_as_event_on_timeout => false
        #timeout_task_id_field => "[@metadata][index_id]"
        #timeout => 60 
        #inactivity_timeout => 30
        code => "sample code"
        timeout_code => "sample code"
      }
    }
  }

  # sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-exec
  output {
     if ([purge_task] == "yes") {
       exec {
           command => "curl -XPOST '127.0.0.1:9200/_all/_delete_by_query?conflicts=proceed' -H 'Content-Type: application/json' -d'
               {
                 \"query\": {
                   \"range\" : {
                     \"@timestamp\" : {
                       \"lte\" : \"now-3h\"
                     }
                   }
                 }
               }
           '"
       }
     } else {
         stdout { codec => json_lines}
         elasticsearch {
            "hosts" => "127.0.0.1:9200"
            "index" => "myindex_%{api_key}"
            "document_type" => "%{[@metadata][index_type]}"
            "document_id" => "%{[@metadata][index_id]}"
            "doc_as_upsert" => true
            "action" => "update"
            "retry_on_conflict" => 7
         }
     }
  }

When I restart Logstash:

sudo initctl restart logstash

In /var/log/logstash/logstash-plain.log everything looks fine, and the actual indexing into Elasticsearch works!

However, if I add another SQL input to this configuration file:

input {
     jdbc {
         jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
         jdbc_user => "root"
         jdbc_password => ""
         jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
         jdbc_driver_class => "com.mysql.jdbc.Driver"
         schedule => "*/5 * * * *"
         statement => "select * from orders"
         use_column_value => false
         clean_run => true
     }
  }

Indexing stops due to an error in the configuration file!

In /var/log/logstash/logstash-plain.log:

[2018-04-06T21:33:54,123][ERROR][logstash.agent ] Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: Aggregate plugin: For task_id pattern '%{id}', there are more than one filter which defines timeout options. All timeout options have to be defined in only one aggregate filter per task_id pattern. Timeout options are : timeout, inactivity_timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_task_id_field, timeout_tags>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-aggregate-2.6.1/lib/logstash/filters/aggregate.rb:486:in `register'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-aggregate-2.6.1/lib/logstash/filters/aggregate.rb:480:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:281:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:302:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:226:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}
[2018-04-06T21:33:54,146][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-06T21:33:57,131][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}

I am really new to Logstash and Elasticsearch...

What does this mean?

I would appreciate it if someone could tell me why merely adding a new input causes this tool to crash?!

1 Answer:

Answer 0: (score: 1)


> I would appreciate it if someone could tell me why merely adding a new input causes this tool to crash?!

You cannot add two separate input sections to the same configuration. As the documentation says, if you want multiple inputs in one configuration file, you should use something like this:

input {
  file {
    path => "/var/log/messages"
    type => "syslog"
  }

  file {
    path => "/var/log/apache/access.log"
    type => "apache"
  }
}
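
Applied to the configuration in the question, that means putting both jdbc sources inside a single input block. A minimal sketch under that assumption (the type values "customer" and "order" are illustrative additions, not part of the original config; they let later filter and output sections tell the two streams apart):

input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    jdbc_user => "root"
    jdbc_password => ""
    jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "*/5 * * * *"
    statement => "select * from customers"
    type => "customer"
  }

  jdbc {
    # same connection settings, different query
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
    jdbc_user => "root"
    jdbc_password => ""
    jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "*/5 * * * *"
    statement => "select * from orders"
    type => "order"
  }
}

The filter section could then wrap the aggregate filter in a conditional such as if [type] == "customer" { ... }, so that the task_id pattern '%{id}' and its timeout options are only registered once, which is what the error message in the log is complaining about.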