Elasticsearch not recovering data from Logstash

Date: 2016-05-14 10:44:25

Tags: elasticsearch logstash kibana elastic-stack logstash-configuration

I have an Elasticsearch server:

{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 76,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 297,
  "active_shards" : 297,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 297,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}

It runs on a dual-core machine with 30 GB of RAM and receives logs from 3 to 4 Logstash servers, about 30 inputs in total (all Logstash servers combined). For most of the inputs, logs are being missed and I get no logs for 30-40 minutes, and on the Logstash servers I get warnings like retrying-failed-action-with-response-code-429. Also, the Elasticsearch server has very high RAM usage, and the heartbeat for the log files is very high. I have checked the grok patterns and they are correct. Here is one of my conf files:

input {

    exec {
        command => "/usr/bin/touch /var/run/logstash-monit/input.touch && /bin/echo OK."
        interval => 60
        type => "heartbeat"
    }

    file {
        type => 'seller-forever'
        path => '/var/log/seller/seller.log'
        sincedb_path => "/opt/logstash/sincedb-access1"
    }
}
filter {

    grok {
        type => "seller-forever"
        match => [ "message", "%{GREEDYDATA:logline} %{GREEDYDATA:extra_fields}" ]
    }

    geoip {
        add_tag => [ "GeoIP" ]
        database => "/opt/logstash/GeoLiteCity.dat"
        source => "clientip"
    }
    if [useragent] != "-" and [useragent] != "" {
        useragent {
            add_tag => [ "UA" ]
            source => "useragent"
        }
    }
    if [bytes] == 0 { mutate { remove => "[bytes]" } }
    if [geoip][city_name]      == "" { mutate { remove => "[geoip][city_name]" } }
    if [geoip][continent_code] == "" { mutate { remove => "[geoip][continent_code]" } }
    if [geoip][country_code2]  == "" { mutate { remove => "[geoip][country_code2]" } }
    if [geoip][country_code3]  == "" { mutate { remove => "[geoip][country_code3]" } }
    if [geoip][country_name]   == "" { mutate { remove => "[geoip][country_name]" } }
    if [geoip][latitude]       == "" { mutate { remove => "[geoip][latitude]" } }
    if [geoip][longitude]      == "" { mutate { remove => "[geoip][longitude]" } }
    if [geoip][postal_code]    == "" { mutate { remove => "[geoip][postal_code]" } }
    if [geoip][region_name]    == "" { mutate { remove => "[geoip][region_name]" } }
    if [geoip][time_zone]      == "" { mutate { remove => "[geoip][time_zone]" } }
    if [urlquery]              == "" { mutate { remove => "urlquery" } }

    if "apache_json" in [tags] {
        if [method]    =~ "(HEAD|OPTIONS)" { mutate { remove => "method" } }
        if [useragent] == "-"              { mutate { remove => "useragent" } }
        if [referer]   == "-"              { mutate { remove => "referer" } }
    }
    if "UA" in [tags] {
        if [device] == "Other" { mutate { remove => "device" } }
        if [name]   == "Other" { mutate { remove => "name" } }
        if [os]     == "Other" { mutate { remove => "os" } }
    }

}


output {

    stdout { codec => rubydebug }

    elasticsearch {
        type => "seller-forever"
        index => "seller-forever"
        host => "10.0.0.89"
        protocol => "node"
    }
}

I am using Kibana for visualization. How can I solve this problem? Any help would be appreciated; I cannot figure out what to do.

1 Answer:

Answer 0 (score: 1):

Have you checked the Logstash and Elasticsearch logs?

On another note, I have rewritten your Logstash configuration, because some of the options you are using are outdated or deprecated in my Logstash version, 2.3.2.

I changed remove in your mutate filters to remove_field (remove is deprecated). I removed protocol from the elasticsearch output because it is obsolete (node is the default option).
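For illustration, a rough sketch of what those two changes look like on a couple of lines from your config (field names, index, and host are copied from the question; newer versions of the elasticsearch output plugin use hosts => [...] instead of host):

filter {
    # remove_field replaces the deprecated remove option in mutate
    if [bytes] == 0 {
        mutate { remove_field => [ "bytes" ] }
    }
    if [geoip][city_name] == "" {
        mutate { remove_field => [ "[geoip][city_name]" ] }
    }
    # ... repeat for the other geoip fields ...
}

output {
    elasticsearch {
        index => "seller-forever"
        host  => "10.0.0.89"
        # protocol => "node" removed: node is already the default,
        # and newer plugin versions drop the option entirely
    }
}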

The type option in grok and in the elasticsearch output is obsolete. The type you set in the input is correct, and Logstash will send it along with your events. If you want to do something based on a specific type in the filter section, you need to use something like this:

filter {
    if [type] == "apacheAccess" {
        grok {
            match => [ "message", "%{message}" ]
        }
    }
}
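Applied to your own config, here is a rough sketch of the same idea: the if [type] conditional replaces the removed type option in both the filter and the output (the grok pattern, index, and host are copied from your question, so treat it as illustrative only):

filter {
    if [type] == "seller-forever" {
        grok {
            match => [ "message", "%{GREEDYDATA:logline} %{GREEDYDATA:extra_fields}" ]
        }
    }
}

output {
    if [type] == "seller-forever" {
        elasticsearch {
            index => "seller-forever"
            host  => "10.0.0.89"   # hosts => [...] on newer output-plugin versions
        }
    }
}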

You can fix the unassigned_shards with 2 options:

  1. You can use force merge to force the merging of one or more indices: curl -XPOST 'http://localhost:9200/_forcemerge' (Elasticsearch Documentation: Force Merge)
  2. You can set index.routing.allocation.disable_allocation to false, which re-enables shard allocation: curl -XPUT 'localhost:9200/_settings' -d '{"index.routing.allocation.disable_allocation": false}' (see the sketch below).
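For reference, a rough, version-dependent sketch of both commands plus a follow-up health check, assuming Elasticsearch is reachable on localhost:9200:

# Option 1: force-merge all indices
curl -XPOST 'http://localhost:9200/_forcemerge'

# Option 2: re-enable shard allocation on all indices
# (the exact setting name can differ between Elasticsearch versions)
curl -XPUT 'http://localhost:9200/_settings' -d '{
  "index.routing.allocation.disable_allocation": false
}'

# Check the cluster health again and watch the unassigned_shards count
curl -XGET 'http://localhost:9200/_cluster/health?pretty'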