Customize the log output in Kibana

Asked: 2019-02-27 22:06:13

Tags: logstash elastic-stack logstash-grok filebeat

So, I am using the ELK stack to fetch some logs from a remote server. However, I would like to customize the output of these logs. Is there a way to remove some of the fields, the ones I have highlighted in yellow:

(screenshot: Kibana document view with the unwanted fields highlighted in yellow)

I tried to remove them from _source with remove_field in my logstash.conf.


Do you know how to get rid of those yellow fields in _source for the logs coming from Filebeat?

UPDATE, logstash.conf based on Leandro's comments:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/..."
    ssl_key => "/..logstash.key"
  }
}

filter {
        grok {
            match => {
                "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"
            }
            remove_field => [ "tags", "prospector.type", "host.architecture", "host.containerized", "host.id", "host.os.platform", "host.os.family" ]
        }
}

output {
    elasticsearch {
        hosts => "localhost:9200"
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
}

After updating again with the bracket notation:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => ".../logstash.crt"
    ssl_key => ".../logstash.key"
  }
}

filter {
        grok {
            match => {
                "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"
            }
            remove_field => [ "tags", "[prospector][type]", "[host][architecture]", "[host][containerized]", "[host][id]", "[host][os][platform]", "[host][os][family]", "[beat][hostname]", "[beat][name]", "[beat][version]", "[offset]", "[input][type]", "[meta][cloud][provider]", "[meta][cloud][machine_type]", "[meta][cloud][instance_id]"]
        }
}



output {
    elasticsearch {
        hosts => "localhost:9200"
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
}

Thanks

1 Answer:

Answer 0 (score: 1)

Some of those fields are nested fields; the way to access them in a Logstash filter is with the [field][subfield] notation.

Your remove_field should look like this:

remove_field => ["tags","[host][architecture]","[meta][cloud][provider]"]
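
For comparison, the bracket path simply mirrors the nesting of the JSON document. A minimal sketch (the field value is invented for illustration):

# An event stores host.os.platform as nested JSON:
#   { "host": { "os": { "platform": "centos" } } }
# so the Logstash field reference is [host][os][platform]:
mutate {
  remove_field => ["[host][os][platform]"]
}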

But I don't think you can remove the @version field.

UPDATE:

Using an event example from your Filebeat logs, I simulated a pipeline and got a _grokparsefailure. To remove the fields even when your grok fails, you need to use remove_field inside a mutate filter:

filter {
  grok {
     # your grok pattern from the question goes here
  }
  mutate {
    remove_field => ["[prospector]","[host][architecture]", "[host][containerized]", "[host][id]", "[host][os][platform]", "[host][os][family]", "[beat]", "[offset]", "[input]", "[meta]"]
  }
}
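
This matters because remove_field inside the grok filter is only applied when the pattern matches successfully, while a separate mutate filter runs on every event, so the fields are dropped even when the event is tagged with _grokparsefailure.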

Do not remove the tags field until your problem is solved: when the grok pattern fails to match, Logstash records a _grokparsefailure entry in tags, so you need that field to troubleshoot.

The Logstash output for this example is:

{
  "source": "/logs/api.log",
  "tags": [
    "_grokparsefailure"
  ],
  "@timestamp": "2019-02-28T01:03:41.647Z",
  "message": "2018-09-14 20:23:37 INFO  ContextLoader:272 - Root WebApplicationContext: initialization started",
  "log": {
    "file": {
      "path": "/logs/api.log"
    }
  },
  "@version": "1",
  "host": {
    "os": {
      "codename": "Core",
      "version": "7 (Core)",
      "name": "CentOS Linux"
    },
    "name": "tomcat"
  }
}
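
For reference, this kind of simulation can be reproduced with a throwaway pipeline. The sketch below is an assumption about the setup, not the exact configuration used in this answer: it reads one sample line from stdin and prints the resulting event with the rubydebug codec, so you can check which fields survive the filters.

input {
  stdin { }    # paste a sample log line and press enter
}

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  }
  mutate {
    # same idea as above; removing a field that does not exist is a no-op
    remove_field => ["[prospector]", "[beat]", "[offset]", "[input]", "[meta]"]
  }
}

output {
  stdout { codec => rubydebug }    # print the full event instead of indexing it
}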