Nested JSON field being broken apart in Kibana

Date: 2016-02-03 17:36:42

Tags: json elasticsearch logstash kibana elastic-stack

I have an ELK stack that receives structured JSON logs from Filebeat, like the following:

{"what": "Connected to proxy service", "who": "proxy.service", "when": "03.02.2016 13:29:51", "severity": "DEBUG", "more": {"host": "127.0.0.1", "port": 2004}}
{"what": "Service registered with discovery", "who": "proxy.discovery", "when": "03.02.2016 13:29:51", "severity": "DEBUG", "more": {"ctx": {"node": "igz0", "ip": "127.0.0.1:5301", "irn": "proxy"}, "irn": "igz0.proxy.827378e7-3b67-49ef-853c-242de033e645"}}
{"what": "Exception raised while setting service value", "who": "proxy.discovery", "when": "03.02.2016 13:46:34", "severity": "WARNING", "more": {"exc": "ConnectionRefusedError('Connection refused',)", "service": "igz0.proxy.827378e7-3b67-49ef-853c-242de033e645"}}

The nested JSON "more" field gets broken apart (I'm not sure by which part of the stack) into separate fields in Kibana ("more.host", "more.ctx", and so on).
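For example, the first log line above ends up in Kibana roughly as the flattened document below (a sketch of the field names only; the exact metadata fields added by Filebeat and Logstash will differ):

{
  "what": "Connected to proxy service",
  "who": "proxy.service",
  "severity": "DEBUG",
  "more.host": "127.0.0.1",
  "more.port": 2004
}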

Here is my beats input:

input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "node" {
    json {
      source => "message"
      add_field => {
        "who" => "%{name}"
        "what" => "%{msg}"
        "severity" => "%{level}"
        "when" => "%{time}"
      }
    }
  } else {
    json {
      source => "message"
    }
  }
  date {
    match => [ "when" , "dd.MM.yyyy HH:mm:ss", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"]
  }
}

Here is my output:

output {
  elasticsearch {
    hosts => ["localhost"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout { codec => rubydebug }
}

Is there a way to get a single field that contains the entire "more" object without it being broken apart?

1 Answer:

Answer 0 (score: 0)

You should be able to use a ruby filter to take the hash and convert it back into a string.

filter {
   ruby {
      code => "event['more'] = event['more'].to_s"
   }
}

You'll probably want to wrap it in an if to make sure the field exists first.
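A minimal sketch of that guard, assuming the field is named "more" and the pre-5.x Ruby event syntax used in the answer above:

filter {
  if [more] {
    ruby {
      # Serialize the nested hash into a single string field
      code => "event['more'] = event['more'].to_s"
    }
  }
}

Note that on Logstash 5.x and later the ruby filter uses the Event API rather than direct hash-style access, so the assignment would instead be written as event.set('more', event.get('more').to_s).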