What is wrong with this Logstash configuration?

Date: 2016-07-05 15:00:08

Tags: amazon-s3 logstash apache-kafka

We are using Logstash (2.3.3) with the new Kafka input plugin (3.0.2) to listen to multiple Kafka topics. Based on the topic name (added as metadata), each topic's data is then redirected to a specific folder in an S3 bucket. With the current configuration, however, only the data for the first S3 output appears to be landing in its S3 bucket/folder.

Can someone tell me what is going wrong here? I'm also fairly sure there is a better way to write this configuration to meet our requirements!

input
{
 kafka
 {
  bootstrap_servers => "10.0.0.5:9093,10.0.1.5:9093"
  topics => "topic"
  codec => "json"
  ssl => true
  ssl_keystore_location => "/opt/logstash/ssl/server.keystore.jks"
  ssl_keystore_password => "<snipped>"
  ssl_truststore_location => "/opt/logstash/ssl/server.truststore.jks"
  ssl_truststore_password => "<snipped>"
  add_field => { "[@metadata][topic]" => "topic" }
 }
 kafka
 {
  bootstrap_servers => "10.0.0.5:9093,10.0.1.5:9093"
  topics => "topic-test"
  codec => "json"
  ssl => true
  ssl_keystore_location => "/opt/logstash/ssl/server.keystore.jks"
  ssl_keystore_password => "<snipped>"
  ssl_truststore_location => "/opt/logstash/ssl/server.truststore.jks"
  ssl_truststore_password => "<snipped>"
  add_field => { "[@metadata][topic]" => "topic-test" }
 }
 kafka
 {
  bootstrap_servers => "10.0.0.5:9093,10.0.1.5:9093"
  topics => "daily_batch"  
  ssl => true
  ssl_keystore_location => "/opt/logstash/ssl/server.keystore.jks"
  ssl_keystore_password => "<snipped>"
  ssl_truststore_location => "/opt/logstash/ssl/server.truststore.jks"
  ssl_truststore_password => "<snipped>"
  add_field => { "[@metadata][topic]" => "daily_batch" }
 }
}

output
{
 if [@metadata][topic] == "topic"
 {
  s3
    {
     region => "us-east-1"
     bucket => "our-s3-storage/topic"
     size_file => 20971520
     temporary_directory => "/logstash"
     use_ssl => "true"
     codec => json_lines     
    }
 }
 if [@metadata][topic] == "topic-test"
 {
  s3
    {
     region => "us-east-1"
     bucket => "our-s3-storage/topic-test"
     size_file => 2097152
     temporary_directory => "/logstash"
     use_ssl => "true"
     codec => json_lines     
    }
 }
 if [@metadata][topic] == "daily_batch"
 {
  s3
    {
     region => "us-east-1"
     bucket => "our-s3-storage/daily_batch"
     size_file => 41943
     temporary_directory => "/logstash"
     use_ssl => "true"
    }
 }
}

1 Answer:

Answer 0 (score: 0):

With Logstash 5.0 you will be able to use topics to give your kafka input a list of topics, so that a single input can have

topics => ["topic", "topic-test", "daily_batch"]

in one kafka input. That isn't possible with Logstash 2.3, though, which doesn't have the topics field.
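
If you do consolidate into a single input on Logstash 5.0, the per-input add_field trick above can no longer tag events with their topic, so you would need another way to record the source topic. Below is a minimal sketch, assuming the kafka input's decorate_events option is enabled and that it exposes the topic to later pipeline stages (in recent plugin versions under [@metadata][kafka][topic]; older versions place it differently, so check the docs for your plugin version):

input
{
 kafka
 {
  bootstrap_servers => "10.0.0.5:9093,10.0.1.5:9093"
  topics => ["topic", "topic-test", "daily_batch"]
  # note: the original daily_batch input used the default codec, not json
  codec => "json"
  ssl => true
  ssl_keystore_location => "/opt/logstash/ssl/server.keystore.jks"
  ssl_keystore_password => "<snipped>"
  ssl_truststore_location => "/opt/logstash/ssl/server.truststore.jks"
  ssl_truststore_password => "<snipped>"
  # assumption: decorate_events records the source topic on the event so the
  # output conditional can route on it; the exact field it populates varies
  # by plugin version, so the output condition may need adjusting to match
  decorate_events => true
 }
}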

You can definitely condense the outputs by using Logstash's ability to interpolate field values into strings in the configuration on a per-event basis. To make sure bad data doesn't end up creating odd one-off buckets, you can first check the topic against an array of expected values:

output
{
 if [@metadata][topic] in ["topic", "topic-test", "daily_batch"]
 {
  s3
    {
     region => "us-east-1"
     bucket => "our-s3-storage/%{[@metadata][topic]}"
     size_file => 41943
     temporary_directory => "/logstash"
     use_ssl => "true"
    }
 }
}