I'm trying to get Logstash, running in a Dockerized ELK stack, to parse AWS ELB logs that are stored in an S3 bucket.
I added my Logstash configuration file (and commented out all the others):
# AWS ELB configuration file
ADD ./aws_elb_logs.conf /etc/logstash/conf.d/aws_elb_logs.conf
The configuration file looks like this:
input {
  s3 {
    # Logging_user AWS creds
    access_key_id => "fjnsdfjnsdjfnjsdn"
    secret_access_key => "asdfsdfsdfsdfsdfsdfsdfsd"
    bucket => "elb-access-logs"
    region => "us-west-2"
    # keep track of the last processed file
    sincedb_path => "./last-s3-file"
    codec => "json"
    type => "elb"
  }
}

filter {
  if [type] == "elb" {
    grok {
      match => [ 'message', '%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int} (?:%{IP:backend_ip}:%{NUMBER:backend_port:int}|-) %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_time:float} %{NUMBER:response_processing_time:float} (?:%{NUMBER:elb_status_code:int}|-) (?:%{NUMBER:backend_status_code:int}|-) %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int} "(?:%{WORD:verb}|-) (?:%{GREEDYDATA:request}|-) (?:HTTP/%{NUMBER:httpversion}|-( )?)" "%{DATA:userAgent}"( %{NOTSPACE:ssl_cipher} %{NOTSPACE:ssl_protocol})?' ]
    }
    grok {
      match => [ "request", "%{URIPROTO:http_protocol}" ]
    }
    geoip {
      source => "client_ip"
      target => "geoip"
      add_tag => [ "geoip" ]
    }
    useragent {
      source => "userAgent"
    }
    date {
      match => ["timestamp", "ISO8601"]
    }
  }
}

output {
  elasticsearch {
    hosts => localhost
    port => "9200"
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout {
    debug => true
  }
}
When I create the container, I get the following error in the Logstash log:
==> /var/log/logstash/logstash.log <==
{:timestamp=>"2016-10-18T13:04:40.798000+0000", :message=>"Pipeline aborted due to error", :exception=>"LogStash::ConfigurationError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/config/mixin.rb:88:in `config_init'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/config/mixin.rb:72:in `config_init'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/outputs/base.rb:79:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/output_delegator.rb:74:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/agent.rb:491:in `start_pipeline'"], :level=>:error}
{:timestamp=>"2016-10-18T13:04:43.801000+0000", :message=>"stopping pipeline", :id=>"main"}
I can't figure out what I'm doing wrong!
Any pointers are welcome...
EDIT:
Now I'm getting this:
==> /var/log/logstash/logstash.log <==
{:timestamp=>"2016-10-18T14:26:50.492000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::S3 access_key_id=>\"gsfgdfgdfgdfgdfg\", secret_access_key=>\"dsfgsdfgsdgsdfgsdfg\", bucket=>\"elb-access-logs-dr\", region=>\"us-west-2\", sincedb_path=>\"./last-s3-file\", codec=><LogStash::Codecs::JSON charset=>\"UTF-8\">, type=>\"elb\", use_ssl=>true, delete=>false, interval=>60, temporary_directory=>\"/opt/logstash/logstash\">\n Error: The request signature we calculated does not match the signature you provided. Check your key and signing method.", :level=>:error}
Answer (score: 1):
If you are using one of the containers with a Logstash version > 2, your configuration of the elasticsearch output plugin is the source of the error. In Logstash 2 the port configuration option was removed; the port is now configured together with the host in the hosts option (cf. the docs).
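For illustration, a minimal sketch of how the corrected elasticsearch output block could look, assuming Elasticsearch is reachable on localhost:9200 (adjust the host to your environment):

output {
  elasticsearch {
    # Logstash >= 2.x: the port is given as part of the hosts entry,
    # there is no separate "port" option anymore
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}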