Elasticsearch - replicas unassigned after reopening an index (INDEX_REOPENED)

Date: 2018-03-29 15:42:16

Tags: elasticsearch

I closed an index and then reopened it, and now my replica shards cannot be assigned.

curl -s -XGET localhost:9201/_cat/shards?h=index,shard,prirep,state,unassigned.reason | grep UNASSIGNED
2018.03.27-team-logs 2 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 5 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 3 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 4 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 1 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 0 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 2 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 5 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 3 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 4 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 1 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 0 r UNASSIGNED INDEX_REOPENED

Can someone explain what this error means and how to fix it? Everything worked fine before I closed the index. It is configured with 6 shards and 1 replica, running Elasticsearch 6.2.

Edit:

Output of curl -XGET "localhost:9201/_cat/shards":

2018.03.29-team-logs 1 r STARTED    1739969 206.2mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.29-team-logs 1 p STARTED    1739969   173mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.29-team-logs 2 p STARTED    1739414 169.9mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.29-team-logs 2 r STARTED    1739414 176.3mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.29-team-logs 4 p STARTED    1740185   186mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.29-team-logs 4 r STARTED    1740185 169.4mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.29-team-logs 5 r STARTED    1739660 164.3mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.29-team-logs 5 p STARTED    1739660 180.1mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.29-team-logs 3 p STARTED    1740606 171.2mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.29-team-logs 3 r STARTED    1740606 173.4mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.29-team-logs 0 r STARTED    1740166 169.7mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.29-team-logs 0 p STARTED    1740166   187mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.28-team-logs 1 p STARTED    2075020 194.2mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.28-team-logs 1 r UNASSIGNED                               
2018.03.28-team-logs 2 p STARTED    2076268 194.9mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.28-team-logs 2 r UNASSIGNED                               
2018.03.28-team-logs 4 p STARTED    2073906 194.9mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.28-team-logs 4 r UNASSIGNED                               
2018.03.28-team-logs 5 p STARTED    2072921   195mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.28-team-logs 5 r UNASSIGNED                               
2018.03.28-team-logs 3 p STARTED    2074579 194.1mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.28-team-logs 3 r UNASSIGNED                               
2018.03.28-team-logs 0 p STARTED    2073349 193.9mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.28-team-logs 0 r UNASSIGNED                               
2018.03.27-team-logs 1 p STARTED     356769  33.5mb 10.207.46.246 elk-es-data-warm-1.platform.osdc2.mall.local
2018.03.27-team-logs 1 r UNASSIGNED                               
2018.03.27-team-logs 2 p STARTED     356798  33.6mb 10.206.46.244 elk-es-data-warm-2.platform.osdc1.mall.local
2018.03.27-team-logs 2 r UNASSIGNED                               
2018.03.27-team-logs 4 p STARTED     356747  33.7mb 10.207.46.246 elk-es-data-warm-1.platform.osdc2.mall.local
2018.03.27-team-logs 4 r UNASSIGNED                               
2018.03.27-team-logs 5 p STARTED     357399  33.8mb 10.207.46.245 elk-es-data-warm-2.platform.osdc2.mall.local
2018.03.27-team-logs 5 r UNASSIGNED                               
2018.03.27-team-logs 3 p STARTED     357957  33.7mb 10.206.46.245 elk-es-data-warm-1.platform.osdc1.mall.local
2018.03.27-team-logs 3 r UNASSIGNED                               
2018.03.27-team-logs 0 p STARTED     356357  33.4mb 10.207.46.245 elk-es-data-warm-2.platform.osdc2.mall.local
2018.03.27-team-logs 0 r UNASSIGNED                               
.kibana                  0 p STARTED          2  12.3kb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
.kibana                  0 r UNASSIGNED

Output of curl -XGET "localhost:9201/_cat/nodes":

10.207.46.248  8 82 0 0.07 0.08 0.11 d - elk-es-data-hot-2
10.206.46.245  9 64 0 0.04 0.11 0.08 d - elk-es-data-warm-1
10.207.46.249 11 90 0 0.00 0.01 0.05 m * elk-es-master-2
10.207.46.245  9 64 0 0.00 0.01 0.05 d - elk-es-data-warm-2
10.206.46.247 12 82 0 0.00 0.06 0.08 d - elk-es-data-hot-1
10.206.46.244 10 64 0 0.08 0.04 0.05 d - elk-es-data-warm-2
10.207.46.243  5 86 0 0.00 0.01 0.05 d - elk-kibana
10.206.46.248 10 92 1 0.04 0.18 0.24 m - elk-es-master-1
10.206.46.246  6 82 0 0.02 0.07 0.09 d - elk-es-data-hot-2
10.207.46.247  9 82 0 0.06 0.06 0.05 d - elk-es-data-hot-1
10.206.46.241  6 91 0 0.00 0.02 0.05 m - master-test
10.206.46.242  8 89 0 0.00 0.02 0.05 d - es-kibana
10.207.46.246  8 64 0 0.00 0.02 0.05 d - elk-es-data-warm-1

1 Answer:

Answer 0 (score: 1)

This is expected behaviour.

Elasticsearch never puts a primary shard and its replica on the same node, so you need at least 2 data nodes to be able to hold 1 replica.
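
A quick way to count the data nodes (role column d) is to narrow the _cat/nodes output to the relevant columns; v and h= are standard cat parameters, and the port is taken from the question:

curl -s "localhost:9201/_cat/nodes?v&h=ip,name,node.role"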

You can simply set the number of replicas to 0:

PUT */_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
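
Note that this only clears the symptom by dropping redundancy. Once the underlying cause is fixed, you would typically restore the replicas; a sketch, using one of the index names from the question:

PUT 2018.03.28-team-logs/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}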

Update:

After running the following request

GET /_cluster/allocation/explain?pretty

we can see the response here:

https://pastebin.com/1ag1Z7jL

"explanation": "there are too many copies of the shard allocated to nodes with attribute [datacenter], there are [2] total configured shard copies for this shard id and [3] total attribute values, expected the allocated shard count per attribute [2] to be less than or equal to the upper bound of the required number of shards per attribute [1]"
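
The explain API can also be pointed at one specific replica; a minimal sketch, using an index and shard number taken from the output above:

GET /_cluster/allocation/explain
{
  "index": "2018.03.28-team-logs",
  "shard": 0,
  "primary": false
}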

Probably you are using zone awareness. Elasticsearch will avoid putting a primary shard and its replica in the same zone: https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-awareness.html

From the allocation awareness documentation:

With ordinary awareness, if one zone lost contact with the other zone, Elasticsearch would assign all of the missing replica shards to a single zone. But in this example, this sudden extra load would cause the hardware in the remaining zone to be overloaded.

Forced awareness solves this problem by never allowing copies of the same shard to be allocated to the same zone.

For example, let's say we have an awareness attribute called zone, and we know we are going to have two zones, zone1 and zone2. Here is how we can force awareness on the nodes:

cluster.routing.allocation.awareness.force.zone.values: zone1,zone2
cluster.routing.allocation.awareness.attributes: zone
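
Applied to this cluster, the awareness attribute is apparently called datacenter (it appears in the explain output above). A minimal elasticsearch.yml sketch under that assumption; the osdc1/osdc2 values are only guesses based on the host names and must match the attributes your nodes actually carry:

# elasticsearch.yml sketch -- the attribute name "datacenter" comes from the
# allocation explain output; the values osdc1/osdc2 are assumptions
node.attr.datacenter: osdc1   # osdc2 on the nodes in the other data center
cluster.routing.allocation.awareness.attributes: datacenter
cluster.routing.allocation.awareness.force.datacenter.values: osdc1,osdc2

You can verify which attributes each node really exposes with the nodeattrs cat API:

curl -s "localhost:9201/_cat/nodeattrs?v"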