Indexing speed in Elasticsearch is very slow

Posted: 2017-01-26 18:49:31

Tags: elasticsearch logstash

No matter what I do, I cannot get the indexing rate above 10,000 events per second. A single Logstash instance receives roughly 13,000 events per second from Kafka, and I run 3 Logstash instances on different machines, all reading from the same Kafka topic.

I have set up an ELK cluster in which 3 Logstash instances read data from Kafka and send it to my Elasticsearch cluster.

The cluster consists of 3 Logstash instances, 3 Elasticsearch master nodes, 3 Elasticsearch client nodes, and 50 Elasticsearch data nodes.

Logstash 2.0.4
Elasticsearch 5.0.2
Kibana 5.0.2

All the Citrix VMs have the same configuration:

Red Hat Linux 7
Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 6 cores
32 GB RAM
2 TB spinning disks

Logstash configuration file:

 output {
    elasticsearch {
      hosts => ["dataNode1:9200","dataNode2:9200","dataNode3:9200", ..., "dataNode50:9200"]
      index => "logstash-applogs-%{+YYYY.MM.dd}-1"
      workers => 6
      user => "uname"
      password => "pwd"
    }
}
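
For reference, in the Logstash 2.x elasticsearch output the size of each bulk request is controlled by the flush_size option (with idle_flush_time forcing a flush when a batch does not fill up), assuming the installed plugin version supports it. Below is a minimal sketch of the same output with those options added; the values are placeholders for illustration, not settings I actually ran:

 output {
    elasticsearch {
      hosts => ["dataNode1:9200"]
      index => "logstash-applogs-%{+YYYY.MM.dd}-1"
      workers => 6
      flush_size => 1000       # hypothetical value: max documents per bulk request
      idle_flush_time => 1     # hypothetical value: flush after 1s even if the batch is not full
      user => "uname"
      password => "pwd"
    }
}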

The elasticsearch.yml file on the Elasticsearch data nodes:

 cluster.name: my-cluster-name
 node.name: node46-data-46
 node.master: false
 node.data: true
 bootstrap.memory_lock: true
 path.data: /apps/dataES1/data
 path.logs: /apps/dataES1/logs
 discovery.zen.ping.unicast.hosts: ["master1","master2","master3"]
 network.host: hostname
 http.port: 9200
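
For context, index-level settings such as refresh_interval are not part of elasticsearch.yml in 5.x; they would be applied through the index settings API (or an index template), roughly as in the sketch below. The 30s value is only an illustrative placeholder, and uname/pwd are the same placeholder credentials as in the Logstash output; this is not a setting my cluster is currently running with:

 curl -u uname:pwd -XPUT "http://dataNode1:9200/logstash-applogs-*/_settings" -d '
 {
   "index": {
     "refresh_interval": "30s"
   }
 }'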

The only change that I made in my **jvm.options** file is

-Xms15g
-Xmx15g

The system-level configuration changes I made are as follows:

vm.max_map_count=262144

In /etc/security/limits.conf I added:

elastic       soft    nofile          65536
elastic       hard    nofile          65536
elastic       soft    memlock         unlimited
elastic       hard    memlock         unlimited
elastic       soft    nproc     65536
elastic       hard    nproc     unlimited

Indexing rate:

(screenshots of the indexing-rate graphs)

On one of the active data nodes:

$ sudo iotop -o

Total DISK READ :       0.00 B/s | Total DISK WRITE :     243.29 K/s
Actual DISK READ:       0.00 B/s | Actual DISK WRITE:     357.09 K/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
 5199 be/3 root        0.00 B/s    3.92 K/s  0.00 %  1.05 % [jbd2/xvdb1-8]
14079 be/4 elkadmin    0.00 B/s   51.01 K/s  0.00 %  0.53 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13936 be/4 elkadmin    0.00 B/s   51.01 K/s  0.00 %  0.39 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13857 be/4 elkadmin    0.00 B/s   58.86 K/s  0.00 %  0.34 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13960 be/4 elkadmin    0.00 B/s   35.32 K/s  0.00 %  0.33 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13964 be/4 elkadmin    0.00 B/s   31.39 K/s  0.00 %  0.27 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
14078 be/4 elkadmin    0.00 B/s   11.77 K/s  0.00 %  0.00 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch


Index details:

index                         shard prirep state       docs  store
logstash-applogs-2017.01.23-3 11    r      STARTED 30528186   35gb
logstash-applogs-2017.01.23-3 11    p      STARTED 30528186 30.3gb
logstash-applogs-2017.01.23-3 9     p      STARTED 30530585 35.2gb
logstash-applogs-2017.01.23-3 9     r      STARTED 30530585 30.5gb
logstash-applogs-2017.01.23-3 1     r      STARTED 30526639 30.4gb
logstash-applogs-2017.01.23-3 1     p      STARTED 30526668 30.5gb
logstash-applogs-2017.01.23-3 14    p      STARTED 30539209 35.5gb
logstash-applogs-2017.01.23-3 14    r      STARTED 30539209   35gb
logstash-applogs-2017.01.23-3 12    p      STARTED 30536132 30.3gb
logstash-applogs-2017.01.23-3 12    r      STARTED 30536132 30.3gb
logstash-applogs-2017.01.23-3 15    p      STARTED 30528216 30.4gb
logstash-applogs-2017.01.23-3 15    r      STARTED 30528216 30.4gb
logstash-applogs-2017.01.23-3 19    r      STARTED 30533725 35.3gb
logstash-applogs-2017.01.23-3 19    p      STARTED 30533725 36.4gb
logstash-applogs-2017.01.23-3 18    r      STARTED 30525190 30.2gb
logstash-applogs-2017.01.23-3 18    p      STARTED 30525190 30.3gb
logstash-applogs-2017.01.23-3 8     p      STARTED 30526785 35.8gb
logstash-applogs-2017.01.23-3 8     r      STARTED 30526785 35.3gb
logstash-applogs-2017.01.23-3 3     p      STARTED 30526960 30.4gb
logstash-applogs-2017.01.23-3 3     r      STARTED 30526960 30.2gb
logstash-applogs-2017.01.23-3 5     p      STARTED 30522469 35.3gb
logstash-applogs-2017.01.23-3 5     r      STARTED 30522469 30.8gb
logstash-applogs-2017.01.23-3 6     p      STARTED 30539580 30.9gb
logstash-applogs-2017.01.23-3 6     r      STARTED 30539580 30.3gb
logstash-applogs-2017.01.23-3 7     p      STARTED 30535488 30.3gb
logstash-applogs-2017.01.23-3 7     r      STARTED 30535488 30.4gb
logstash-applogs-2017.01.23-3 2     p      STARTED 30524575 35.2gb
logstash-applogs-2017.01.23-3 2     r      STARTED 30524575 35.3gb
logstash-applogs-2017.01.23-3 10    p      STARTED 30537232 30.4gb
logstash-applogs-2017.01.23-3 10    r      STARTED 30537232 30.4gb
logstash-applogs-2017.01.23-3 16    p      STARTED 30530098 30.3gb
logstash-applogs-2017.01.23-3 16    r      STARTED 30530098 30.3gb
logstash-applogs-2017.01.23-3 4     r      STARTED 30529877 30.2gb
logstash-applogs-2017.01.23-3 4     p      STARTED 30529877 30.2gb
logstash-applogs-2017.01.23-3 17    r      STARTED 30528132 30.2gb
logstash-applogs-2017.01.23-3 17    p      STARTED 30528132 30.4gb
logstash-applogs-2017.01.23-3 13    r      STARTED 30521873 30.3gb
logstash-applogs-2017.01.23-3 13    p      STARTED 30521873 30.4gb
logstash-applogs-2017.01.23-3 0     r      STARTED 30520172 30.4gb
logstash-applogs-2017.01.23-3 0     p      STARTED 30520172 30.5gb

I tested the data coming into Logstash by dumping it to a file: in 30 seconds I got a 290 MB file with 377,822 lines, i.e. roughly 12,600 events (about 9.7 MB) per second per Logstash instance. So Kafka is not the problem: across my 3 Logstash servers I am receiving about 35,000 events per second, yet Elasticsearch manages to index at most 10,000 events per second.

Can someone help me figure out what is wrong here?

Edit: I tried sending bulk requests of 125 documents, then 500, 1000, and 10000, but my indexing speed still did not improve.
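
A check worth running alongside the bulk-size experiments (a sketch only, using the same placeholder credentials; this is not output from my cluster) is whether the bulk thread pools on the data nodes are queueing or rejecting requests. Non-zero rejected counts while indexing would point at the data nodes themselves, e.g. the spinning disks, rather than at the Logstash batch size:

 curl -u uname:pwd "http://dataNode1:9200/_cat/thread_pool/bulk?v&h=node_name,name,active,queue,rejected"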

1 Answer:

Answer 0 (score: 0)

I improved the indexing rate by moving to bigger data node machines.

Data nodes: VMware virtual machines with the following configuration:

14 CPUs @ 2.60GHz
64 GB RAM, 31 GB of which is dedicated to Elasticsearch

The only disks available to me were SAN storage over Fibre Channel, since I could not get any SSDs or local disks.

I achieved a maximum indexing rate of 100,000 events per second. Each document is about 2 to 5 KB in size.