Poor Elasticsearch indexing performance

Time: 2015-12-24 08:07:48

Tags: performance elasticsearch logstash elastic-stack

Currently I am using Elasticsearch to store and query some logs. We have built a five-node Elasticsearch cluster: two indexing nodes and three query nodes. On the indexing side, both servers run Redis, Logstash, and Elasticsearch. Elasticsearch uses NFS storage as its data store. Our requirement is to index 300 log entries/second, but the best performance I can get out of Elasticsearch is only about 25 log entries/second! Elasticsearch's Xmx is 16G. Component versions:

Redis: 2.8.12
logstash: 1.4.2
elasticsearch: 1.5.0

Our current index settings are as follows:

     {
      "userlog" : {
        "settings" : {
          "index" : {
            "index" : {
              "store" : {
                "type" : "mmapfs"
              },
              "translog" : {
                "flush_threshold_ops" : "50000"
              }
            },
            "number_of_replicas" : "1",
            "translog" : {
              "flush_threshold_size" : "1G",
              "durability" : "async"
            },
            "merge" : {
              "scheduler" : {
                "max_thread_count" : "1"
              }
            },
            "indexing" : {
              "slowlog" : {
                "threshold" : {
                  "index" : {
                    "trace" : "2s",
                    "info" : "5s"
                  }
                }
              }
            },
            "memory" : {
              "index_buffer_size" : "3G"
            },
            "refresh_interval" : "30s",
            "version" : {
              "created" : "1050099"
            },
            "creation_date" : "1447730702943",
            "search" : {
              "slowlog" : {
                "threshold" : {
                  "fetch" : {
                    "debug" : "500ms"
                  },
                  "query" : {
                    "warn" : "10s",
                    "trace" : "1s"
                  }
                }
              }
            },
            "indices" : {
              "memory" : {
                "index_buffer_size" : "30%"
              }
            },
            "uuid" : "E1ttme3fSxKVD5kRHEr_MA",
            "index_currency" : "32",
            "number_of_shards" : "5"
          }
        }
      }
    }
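Since search freshness usually matters less than throughput during a heavy load, one common tuning step is to disable refresh while bulk-loading and restore it afterwards. A sketch only; the host, interval values, and the choice to tune this index are assumptions, not taken from this cluster:

```
# Disable refresh during the bulk load (endpoint assumed to be a local node)
curl -XPUT 'http://localhost:9200/userlog/_settings' -d '
{ "index" : { "refresh_interval" : "-1" } }'

# ... run the bulk load ...

# Restore the interval the index already uses (30s per the settings above)
curl -XPUT 'http://localhost:9200/userlog/_settings' -d '
{ "index" : { "refresh_interval" : "30s" } }'
```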

Here is my Logstash configuration:

    input {
            redis {
                    host => "eanprduserreporedis01.eao.abn-iad.ea.com"
                    port => "6379"
                    type => "redis-input"
                    data_type => "list"
                    key => "userLog"
                    threads => 15
            }
        # Second redis block begin
            redis {
                    host => "eanprduserreporedis02.eao.abn-iad.ea.com"
                    port => "6379"
                    type => "redis-input"
                    data_type => "list"
                    key => "userLog"
                    threads => 15
            }
            # Second redis block end
    }

    output {
            elasticsearch {
                    cluster => "customizedlog_prod"
                    index => "userlog"
                    workers => 30
            }
           stdout{}
    }

One very strange thing is that although the current indexing rate is only ~20/s, the I/O wait is extremely high, almost 70%, and it is mostly read traffic. According to nfsiostat, the current read throughput is about 200 Mbps! So essentially, to index each log entry it reads about 10 Mbits of data, which is crazy, since the average length of our log entries is less than 10 KB. So I took a jstack dump of Elasticsearch, and here is the result for one RUNNABLE thread:

    "elasticsearch[somestupidhostname][bulk][T#3]" daemon prio=10 tid=0x00007f230c109800 nid=0x79f6 runnable [0x00007f1ba85f0000]
       java.lang.Thread.State: RUNNABLE
            at sun.nio.ch.FileDispatcherImpl.pread0(Native Method)
            at sun.nio.ch.FileDispatcherImpl.pread(FileDispatcherImpl.java:52)
            at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:220)
            at sun.nio.ch.IOUtil.read(IOUtil.java:197)
            at sun.nio.ch.FileChannelImpl.readInternal(FileChannelImpl.java:730)
            at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:715)
            at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.readInternal(NIOFSDirectory.java:179)
            at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:342)
            at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
            at org.apache.lucene.store.DataInput.readVInt(DataInput.java:122)
            at org.apache.lucene.store.BufferedIndexInput.readVInt(BufferedIndexInput.java:221)
            at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock(SegmentTermsEnumFrame.java:152)
            at org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekExact(SegmentTermsEnum.java:506)
            at org.elasticsearch.common.lucene.uid.PerThreadIDAndVersionLookup.lookup(PerThreadIDAndVersionLookup.java:104)
            at org.elasticsearch.common.lucene.uid.Versions.loadDocIdAndVersion(Versions.java:150)
            at org.elasticsearch.common.lucene.uid.Versions.loadVersion(Versions.java:161)
            at org.elasticsearch.index.engine.InternalEngine.loadCurrentVersionFromIndex(InternalEngine.java:1002)
            at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:277)
            - locked <0x00000005fc76b938> (a java.lang.Object)
            at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:256)
            at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:455)
            at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:437)
            at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:149)
            at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:515)
            at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:422)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
            at java.lang.Thread.run(Thread.java:745)
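The stack trace is the clue: the bulk thread is inside `InternalEngine.innerCreate` → `loadCurrentVersionFromIndex`, i.e. for every incoming document it reads the on-disk index to check whether that document ID already exists. Over NFS those random reads are expensive, which matches the heavy read traffic. This lookup is paid on every insert when documents carry client-supplied IDs; with auto-generated IDs Elasticsearch can skip it. A minimal sketch (plain Python, hypothetical helper, no client library) of building a `_bulk` payload without explicit `_id`s:

```python
import json

def bulk_payload(index, doc_type, docs, ids=None):
    # Build an Elasticsearch _bulk NDJSON body. ES 1.x bulk actions
    # need both _index and _type on every action line.
    lines = []
    for i, doc in enumerate(docs):
        action = {"index": {"_index": index, "_type": doc_type}}
        if ids is not None:
            # An explicit _id forces the version/ID lookup seen in the
            # stack trace; omitting it lets ES auto-generate the ID.
            action["index"]["_id"] = ids[i]
        lines.append(json.dumps(action))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

# Two documents, no IDs supplied: action lines contain no "_id" key.
payload = bulk_payload("userlog", "logs", [{"msg": "login"}, {"msg": "logout"}])
```

Whether this applies depends on whether the Logstash output is setting document IDs; the config above does not appear to, so the reads may instead come from segment merges or the replica copy, but the stack at least shows where the read path is triggered.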

Can anyone tell me what Elasticsearch is doing and why indexing is so slow? Is it possible to improve it?

1 answer:

Answer 0 (score: 0)

It may not be solely responsible for your poor performance, but take a look at the batch_size option of the redis input. If you pull more than one document at a time out of Redis, I'd bet things get better.
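For reference, in the Logstash redis input this option is spelled `batch_count` (batched pops require Redis 2.6+, which the question's 2.8.12 satisfies). A sketch of one of the question's input blocks with batching added; the value 100 is an assumption to tune, not a recommendation from the answer:

```
redis {
        host        => "eanprduserreporedis01.eao.abn-iad.ea.com"
        port        => "6379"
        type        => "redis-input"
        data_type   => "list"
        key         => "userLog"
        threads     => 15
        batch_count => 100   # pull up to 100 entries per round trip instead of 1
}
```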