Logstash configuration file error(?) (with filter and grok)

Date: 2015-08-18 02:37:00

Tags: python logstash grok logstash-grok logstash-configuration

My log file is:

Jan 1 22:54:17 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 70.77.116.190; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2612;
Jan 1 22:54:22 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 61.164.41.144; dst: %DSTIP%; proto: udp; product: VPN-1 & FireWall-1; service: 5060; s_port: 5069;
Jan 1 22:54:23 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 69.55.245.136; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2970;
Jan 1 22:54:41 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 95.104.65.30; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2565;
Jan 1 22:54:43 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 222.186.24.11; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 2967; s_port: 6000;
Jan 1 22:54:54 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 74.204.108.202; dst: %DSTIP%; proto: udp; product: VPN-1 & FireWall-1; service: 137; s_port: 53038;
Jan 1 22:55:10 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 71.111.186.26; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 38548;
Jan 1 23:02:56 accept %LOGSOURCE% >eth1 inzone: External; outzone: Local; rule: 3; rule_uid: {723F81EF-75C9-4CBB-8913-0EBB3686E0F7}; service_id: icmp-proto; ICMP: Echo Request; src: 24.188.22.101; dst: %DSTIP%; proto:

This is the configuration file I am running:

input {
  file {
    path => "/etc/logstash/external_noise.log"
    type => "external_noise"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => [ 'message', '%{CISCOTIMESTAMP:timestamp} %{WORD:action} %{SPACE} %{DATA:logsource} %{DATA:interface} %{GREEDYDATA:kvpairs}' ]
  }
  kv {
    source => "kvpairs"
    field_split => ";"
  }
}
output {
  elasticsearch {
    action => "index"
    host => "localhost"
    index => "noise-%{+dd.MM.YYYY}"
    workers => 1
  }
}

In Kibana, the fields are somewhat different from the ones I specified. Also, the timestamp is the time at which I started Logstash with this configuration file.

There is a field:
message: Jan 1 22:54:17 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 70.77.116.190; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2612;

From my grok, I have already filtered it. Do I need a mutate to add the fields? Sorry, I'm not an ELK expert; I'd love to learn and understand more.
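One likely reason no fields showed up is that the kv filter's default value separator is "=", while these pairs use ":". A rough Python sketch of the kv split (illustrative only, not Logstash code; `kv_split` is a made-up helper) shows the difference:

```python
line = ("rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; "
        "src: 70.77.116.190; dst: %DSTIP%; proto: tcp; "
        "product: VPN-1 & FireWall-1; service: 445; s_port: 2612;")

def kv_split(text, field_split=";", value_split="="):
    """Rough analogue of the kv filter: split into fields on
    field_split, then split each field once on value_split."""
    fields = {}
    for chunk in text.split(field_split):
        if value_split in chunk:
            key, value = chunk.split(value_split, 1)
            fields[key] = value
    return fields

# With the kv filter's default value_split "=", no pairs are found:
print(kv_split(line))                    # {}
# With value_split ":", the pairs appear (keys/values keep their
# surrounding spaces, since nothing trims them):
print(kv_split(line, value_split=":"))
```

This also hints at why extracted keys like " src" carry a leading space: the text between ";" and ":" is taken verbatim.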

1 Answer:

Answer 0 (score: 0)

As noted in your other question, a few adjustments are needed. You could have figured this out yourself, though.

If this is the input (copied from your question):

Jan 1 22:54:17 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 70.77.116.190; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2612;
Jan 1 22:54:22 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 61.164.41.144; dst: %DSTIP%; proto: udp; product: VPN-1 & FireWall-1; service: 5060; s_port: 5069;
Jan 1 22:54:23 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 69.55.245.136; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2970;
Jan 1 22:54:41 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 95.104.65.30; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 2565;
Jan 1 22:54:43 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 222.186.24.11; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 2967; s_port: 6000;
Jan 1 22:54:54 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 74.204.108.202; dst: %DSTIP%; proto: udp; product: VPN-1 & FireWall-1; service: 137; s_port: 53038;
Jan 1 22:55:10 drop   %LOGSOURCE% >eth1 rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 71.111.186.26; dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; service: 445; s_port: 38548;
Jan 1 23:02:56 accept %LOGSOURCE% >eth1 inzone: External; outzone: Local; rule: 3; rule_uid: {723F81EF-75C9-4CBB-8913-0EBB3686E0F7}; service_id: icmp-proto; ICMP: Echo Request; src: 24.188.22.101; dst: %DSTIP%; proto:

then this is your filter section:

filter {
    grok {
            match => [ "message", "%{CISCOTIMESTAMP:timestamp} %{WORD:action}%{SPACE}%{DATA:logsource} %{DATA:interface} %{GREEDYDATA:kvpairs}" ]
         }
    kv   {
            source => "kvpairs"
            field_split => ";"
            value_split => ":"
    }
}

and this is part of the resulting output:

     "timestamp" => "Jan 1 23:02:56"
        "action" => "drop",
     "logsource" => "%LOGSOURCE%",
     "interface" => ">eth1",
       "kvpairs" => "rule: 7; rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 74.204.108.202; dst: %DSTIP%; proto: udp; product: VPN-1 & FireWall-1; service: 137; s_port: 53038;",
          "rule" => " 7",
     " rule_uid" => " {C1336766-9489-4049-9817-50584D83A245}",
          " src" => " 74.204.108.202",
          " dst" => " %DSTIP%",
        " proto" => " udp",
      " product" => " VPN-1 & FireWall-1",
      " service" => " 137",
       " s_port" => " 53038"

This works for all of the given log lines; I tested it. (Be sure to remove the spaces around %{SPACE} in the grok pattern.)
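As a cross-check, the corrected grok pattern can be approximated with a plain Python regex (a sketch only: grok's DATA maps to a non-greedy `.*?`, GREEDYDATA to a greedy `.*`, and CISCOTIMESTAMP is approximated by a simple month/day/time expression):

```python
import re

# Rough regex equivalent of:
# %{CISCOTIMESTAMP:timestamp} %{WORD:action}%{SPACE}%{DATA:logsource}
#   %{DATA:interface} %{GREEDYDATA:kvpairs}
pattern = re.compile(
    r"(?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<action>\w+)\s+(?P<logsource>.*?) (?P<interface>.*?) (?P<kvpairs>.*)"
)

line = ("Jan 1 22:54:17 drop   %LOGSOURCE% >eth1 rule: 7; "
        "rule_uid: {C1336766-9489-4049-9817-50584D83A245}; src: 70.77.116.190; "
        "dst: %DSTIP%; proto: tcp; product: VPN-1 & FireWall-1; "
        "service: 445; s_port: 2612;")

m = pattern.match(line)
print(m.group("timestamp"))  # Jan 1 22:54:17
print(m.group("action"))     # drop
print(m.group("logsource"))  # %LOGSOURCE%
print(m.group("interface"))  # >eth1
```

Note how `%{WORD:action}%{SPACE}` (with no literal spaces around `%{SPACE}`) lets the pattern absorb the variable run of spaces after "drop"/"accept", which is exactly what the fix above relies on.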

If you want to remove the kvpairs field from the output, add one line to the kv filter:

remove_field => "kvpairs"

If you want to overwrite Logstash's @timestamp, add a date filter:

date {
    match => [ "timestamp", "MMM dd HH:mm:ss", "MMM d HH:mm:ss" ]
}

(The second pattern covers single-digit days such as "Jan 1" in your sample lines.)