Feeding JSON into logstash - config problem?

Date: 2014-10-30 23:57:59

Tags: elasticsearch logstash

I have the following JSON input that I want to dump into logstash (and eventually search/dashboard in elasticsearch/kibana).

{"vulnerabilities":[
    {"ip":"10.1.1.1","dns":"z.acme.com","vid":"12345"},
    {"ip":"10.1.1.2","dns":"y.acme.com","vid":"12345"},
    {"ip":"10.1.1.3","dns":"x.acme.com","vid":"12345"}
]}

I'm using the following logstash configuration:

input {
  file {
    path => "/tmp/logdump/*"
    type => "assets"
    codec => "json"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch { host => localhost }
}

Output:

{
       "message" => "{\"vulnerabilities\":[\r",
      "@version" => "1",
    "@timestamp" => "2014-10-30T23:41:19.788Z",
          "type" => "assets",
          "host" => "av12612sn00-pn9",
          "path" => "/tmp/logdump/stack3.json"
}
{
       "message" => "{\"ip\":\"10.1.1.30\",\"dns\":\"z.acme.com\",\"vid\":\"12345\"},\r",
      "@version" => "1",
    "@timestamp" => "2014-10-30T23:41:19.838Z",
          "type" => "assets",
          "host" => "av12612sn00-pn9",
          "path" => "/tmp/logdump/stack3.json"
}
{
       "message" => "{\"ip\":\"10.1.1.31\",\"dns\":\"y.acme.com\",\"vid\":\"12345\"},\r",
      "@version" => "1",
    "@timestamp" => "2014-10-30T23:41:19.870Z",
          "type" => "shellshock",
          "host" => "av1261wag2sn00-pn9",
          "path" => "/tmp/logdump/stack3.json"
}
{
            "ip" => "10.1.1.32",
           "dns" => "x.acme.com",
           "vid" => "12345",
      "@version" => "1",
    "@timestamp" => "2014-10-30T23:41:19.884Z",
          "type" => "assets",
          "host" => "av12612sn00-pn9",
          "path" => "/tmp/logdump/stack3.json"
}

Obviously logstash is reading each line as a separate event: it treats {"vulnerabilities":[ as an event of its own, I'm guessing the trailing commas on the two subsequent nodes mess up the parsing, and the last node comes through correctly. How do I tell logstash to parse the events inside the vulnerabilities array and to ignore the commas at the end of the lines?

Update 2014-11-05: Following Magnus' recommendation below, I added the json filter and it's working great. However, it would not parse the last line of the JSON correctly without start_position => "beginning" in the file input block. Any ideas why not? I know it reads bottom-up by default, but I would have expected the mutate/gsub to handle this smoothly.

input {
  file {
    path => "/tmp/logdump/*"
    type => "assets"
    start_position => "beginning"
  }
}
filter {
  if [message] =~ /^\[?{"ip":/ {
    mutate {
      gsub => [
        "message", "^\[{", "{",
        "message", "},?\]?$", "}"
      ]
    }
    json {
      source => "message"
      remove_field => ["message"]
    }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch { host => localhost }
}
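
A likely explanation, though this is an assumption rather than anything stated above: the file input only tails files by default and records how far it has read in a sincedb file, so lines that already exist when logstash starts are skipped unless start_position => "beginning" is set, and a file logstash has seen before is resumed from the recorded position regardless. When repeatedly re-testing against the same file, pointing sincedb_path at a throwaway location forces a full re-read. A minimal sketch of the input block under that assumption:

input {
  file {
    path => "/tmp/logdump/*"
    type => "assets"
    # Read pre-existing files from the top instead of tailing them.
    start_position => "beginning"
    # Testing only (assumption): discard the remembered read position so the
    # file is re-read on every run; otherwise the sincedb entry overrides
    # start_position for files logstash has already seen.
    sincedb_path => "/dev/null"
  }
}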

1 Answer:

Answer 0 (score: 5)

You can skip the json codec and use a multiline filter to join the message into a single string that you can then feed to the json filter:

filter {
  multiline {
    pattern => '^{"vulnerabilities":\['
    negate => true
    what => "previous"
  }
  json {
    source => "message"
  }
}

However, this produces the following undesired result:

{
            "message" => "<omitted for brevity>",
           "@version" => "1",
         "@timestamp" => "2014-10-31T06:48:15.589Z",
               "host" => "name-of-your-host",
               "tags" => [
        [0] "multiline"
    ],
    "vulnerabilities" => [
        [0] {
             "ip" => "10.1.1.1",
            "dns" => "z.acme.com",
            "vid" => "12345"
        },
        [1] {
             "ip" => "10.1.1.2",
            "dns" => "y.acme.com",
            "vid" => "12345"
        },
        [2] {
             "ip" => "10.1.1.3",
            "dns" => "x.acme.com",
            "vid" => "12345"
        }
    ]
}

Unless the vulnerabilities array contains a fixed number of elements, I don't think there's much we can do with this (without resorting to the ruby filter).
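
For completeness, one option not used in this answer: later Logstash releases ship a split filter that can break an array field into one event per element, and the common remove_field/remove_tag options can drop the leftover message field and the multiline tag. A rough sketch, assuming a Logstash version whose split filter supports splitting array fields:

filter {
  multiline {
    pattern => '^{"vulnerabilities":\['
    negate => true
    what => "previous"
  }
  json {
    source => "message"
    # Drop the raw text and the multiline marker once parsing succeeds.
    remove_field => ["message"]
    remove_tag => ["multiline"]
  }
  # Assumption: requires a split filter that can split an array field
  # into separate events (one event per vulnerability).
  split {
    field => "vulnerabilities"
  }
}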

How about just applying the json filter to the lines that look like what we want and dropping the rest? Your question doesn't make it entirely clear whether all of the logs look like this, so this may not be that useful.

filter {
  if [message] =~ /^\s+{"ip":/ {
    # Remove trailing commas
    mutate {
      gsub => ["message", ",$", ""]
    }
    json {
      source => "message"
      remove_field => ["message"]
    }
  } else {
    drop {}
  }
}