Elasticsearch master disaster recovery

Time: 2017-01-24 15:23:09

Tags: elasticsearch sharding

We have an Elasticsearch cluster with 5 data nodes and 2 master nodes. The Elasticsearch service on one of the master nodes is always kept disabled so that only one master is ever active. Today, for some reason, the current master went down, so we started the service on the second master node. All data nodes connected to the new master and all primary shards were allocated successfully, but none of the replicas were, leaving me with nearly 384 unassigned shards.
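To see exactly which shards are unassigned, the `_cat/shards` API can be grepped. This is a minimal sketch: the sample output below is fabricated for illustration, and against the real cluster you would pipe `curl -s 'http://es-master-node:9200/_cat/shards'` into the same grep.

```shell
# Hypothetical sample of `_cat/shards` output, saved to a file here so the
# parsing can be demonstrated without a live cluster.
cat > /tmp/shards.txt <<'EOF'
logstash-1970.01.18 0 p STARTED    1000 1mb 10.100.0.146 node1
logstash-1970.01.18 0 r UNASSIGNED
logstash-1970.01.18 1 p STARTED    1000 1mb 10.100.0.147 node2
logstash-1970.01.18 1 r UNASSIGNED
EOF

# Count the unassigned shards to track progress during recovery.
grep -c UNASSIGNED /tmp/shards.txt
# → 2
```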

What should I do now to get them allocated?

What are the best practices and steps to follow in this situation?

Here is what my http://es-master-node:9200/_settings looks like: http://pastebin.com/mK1QBfP6

When I try to allocate a shard manually, I get the following error:

➜  Desktop curl -XPOST http://localhost:9200/_cluster/reroute\?pretty -d '{
  "commands": [
    {
      "allocate": {
        "index": "logstash-1970.01.18",
        "shard": 1,
        "node": "node-name",
        "allow_primary": true
      }
    }
  ]
}'
{
  "error" : {
    "root_cause" : [ {
      "type" : "illegal_argument_exception",
      "reason" : "[allocate] allocation of [logstash-1970.01.18][1] on node {node-name}{vrVG4CBbSvubWHOzn2qfQA}{10.100.0.146}{10.100.0.146:9300}{master=false} is not allowed, reason: [YES(allocation disabling is ignored)][NO(more than allowed [85.0%] used disk on node, free: [13.671127301258165%])][YES(shard not primary or relocation disabled)][YES(target node version [2.2.0] is same or newer than source node version [2.2.0])][YES(no allocation awareness enabled)][YES(shard is not allocated to same node or host)][YES(allocation disabling is ignored)][YES(below shard recovery limit of [2])][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(node passes include/exclude/require filters)][YES(primary is already active)]"
    } ],
    "type" : "illegal_argument_exception",
    "reason" : "[allocate] allocation of [logstash-1970.01.18][1] on node {node-name}{vrVG4CBbSvubWHOzn2qfQA}{10.100.0.146}{10.100.0.146:9300}{master=false} is not allowed, reason: [YES(allocation disabling is ignored)][NO(more than allowed [85.0%] used disk on node, free: [13.671127301258165%])][YES(shard not primary or relocation disabled)][YES(target node version [2.2.0] is same or newer than source node version [2.2.0])][YES(no allocation awareness enabled)][YES(shard is not allocated to same node or host)][YES(allocation disabling is ignored)][YES(below shard recovery limit of [2])][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(node passes include/exclude/require filters)][YES(primary is already active)]"
  },
  "status" : 400
}

Any help would be greatly appreciated.

1 answer:

Answer 0 (score: 0)

So, here is what I did to get the unassigned shards allocated:

I spun up 5 new ES data servers and waited for them to join the cluster. Once they were in, I used the following script:

#!/bin/bash
# Round-robin the unassigned shards across the 5 new data nodes.
array=(node1 node2 node3 node4 node5)
node_counter=0
length=${#array[@]}
IFS=$'\n'
for line in $(curl -s 'http://ip-address:9200/_cat/shards' | fgrep UNASSIGNED); do
    INDEX=$(echo "$line" | awk '{print $1}')
    SHARD=$(echo "$line" | awk '{print $2}')
    NODE=${array[$node_counter]}
    echo "$NODE"
    # Caution: allow_primary can cause data loss if applied to a primary
    # shard whose data still exists elsewhere; it is not needed for replicas.
    curl -XPOST 'http://ip-address:9200/_cluster/reroute' -d '{
        "commands": [
        {
            "allocate": {
                "index": "'$INDEX'",
                "shard": '$SHARD',
                "node": "'$NODE'",
                "allow_primary": true
            }
        }
        ]
    }'
    # Advance the round-robin index, wrapping at the array length
    # (the original "(node_counter)%length +1" eventually indexed
    # past the end of the array, yielding an empty node name).
    node_counter=$(( (node_counter + 1) % length ))
done

This assigned the unassigned shards to the new data nodes. The cluster took about 5 to 6 to recover fully. This is a hack, though; a more principled answer would make more sense.
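The round-robin node-picking logic in the script above can be sanity-checked in isolation, with no cluster needed. Note that the counter must wrap with `(node_counter + 1) % length`; otherwise the index eventually runs past the end of the array.

```shell
#!/bin/bash
# Minimal dry run of the round-robin node picker: print which node each
# of six example shards would be assigned to.
array=(node1 node2 node3 node4 node5)
node_counter=0
length=${#array[@]}
for shard in 0 1 2 3 4 5; do
    echo "shard $shard -> ${array[$node_counter]}"
    node_counter=$(( (node_counter + 1) % length ))
done
# → shard 0 -> node1 ... shard 4 -> node5, then shard 5 wraps back to node1
```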

Here are the questions that remain unanswered:

  • The shards already existed on the old nodes, so why wasn't the ES master aware of them?
  • How can we explicitly ask the ES master to scan the existing data nodes and pull information from them (their current state, the replicas they hold, the shards they contain, etc.)?