I'm running a 2-node Elasticsearch cluster, and all of my indices are configured with 2 primary shards and 1 replica. I initially assumed each node would hold one primary shard and one replica copy, but that is not what is happening.
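For reference, the per-index settings can be confirmed through the index settings API (using one of the index names from the output below):
curl -XGET 'http://localhost:9200/logstash-pf-2016.10.03/_settings?pretty'
which should report "number_of_shards" : "2" and "number_of_replicas" : "1" for each of these indices.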
curl -XGET http://localhost:9200/_cat/shards
.kibana 0 p STARTED 1 3.1kb 10.151.6.98 Eleggua
.kibana 0 r UNASSIGNED
logstash-sflow-2016.10.03 1 p STARTED 738 644.4kb 10.151.6.98 Eleggua
logstash-sflow-2016.10.03 1 r UNASSIGNED
logstash-sflow-2016.10.03 0 p STARTED 783 618.4kb 10.151.6.98 Eleggua
logstash-sflow-2016.10.03 0 r UNASSIGNED
logstash-ipf-2016.10.03 1 p STARTED 8480 3.9mb 10.151.6.98 Eleggua
logstash-ipf-2016.10.03 1 r UNASSIGNED
logstash-ipf-2016.10.03 0 p STARTED 8656 6.3mb 10.151.6.98 Eleggua
logstash-ipf-2016.10.03 0 r UNASSIGNED
logstash-raw-2016.10.03 1 p STARTED 254 177.9kb 10.151.6.98 Eleggua
logstash-raw-2016.10.03 1 r UNASSIGNED
logstash-raw-2016.10.03 0 p STARTED 274 180kb 10.151.6.98 Eleggua
logstash-raw-2016.10.03 0 r UNASSIGNED
logstash-pf-2016.10.03 1 p STARTED 4340 2.9mb 10.151.6.98 Eleggua
logstash-pf-2016.10.03 1 r UNASSIGNED
logstash-pf-2016.10.03 0 p STARTED 4234 5.7mb 10.151.6.98 Eleggua
logstash-pf-2016.10.03 0 r UNASSIGNED
As shown above, every shard is hosted on a single node (10.151.6.98), and none of the replicas are assigned.
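As an aside, _cat/shards can also report why each replica is unassigned; assuming the unassigned.reason column is available in this version, something like the following shows it:
curl -XGET 'http://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason'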
curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true'
{
"cluster_name" : "es_gts_seginfo",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"active_primary_shards" : 9,
"active_shards" : 9,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 9,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 50.0
}
What am I doing wrong?
Answer (score: 0)
Thanks everyone, I was able to figure out the problem. One of my nodes was running 2.4.0 and the other 2.4.1; with mismatched versions the rerouting cannot work, because Elasticsearch refuses to allocate a shard copy onto a node running an older version than the node holding the primary. The reroute attempt below makes the reason explicit.
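The mismatch itself is easy to confirm via the cat nodes API (assuming the version column is available, as it is in 2.x):
curl -XGET 'http://localhost:9200/_cat/nodes?v&h=name,ip,version'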
curl -XPOST -d '{ "commands" : [ {
> "allocate" : {
> "index" : ".kibana",
> "shard" : 0,
> "node" : "proc-gts-elk01",
> "allow_primary":true
> }
> } ] }' http://localhost:9200/_cluster/reroute?pretty
{
"error" : {
"root_cause" : [ {
"type" : "illegal_argument_exception",
"reason" : "[allocate] allocation of [.kibana][0] on node {proc-gts-elk01}{dhLrHPqTR0y9IkU_kFS5Cw}{10.151.6.19}{10.151.6.19:9300}{max_local_storage_nodes=1, hostname=proc-gts-elk01, data=yes, master=yes} is not allowed, reason: [YES(below shard recovery limit of [2])][YES(node passes include/exclude/require filters)][YES(primary is already active)][YES(enough disk for shard on node, free: [81.4gb])][YES(shard not primary or relocation disabled)][YES(shard is not allocated to same node or host)][YES(allocation disabling is ignored)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(node meets awareness requirements)][YES(allocation disabling is ignored)][NO(target node version [2.4.0] is older than source node version [2.4.1])]"
} ],
"type" : "illegal_argument_exception",
"reason" : "[allocate] allocation of [.kibana][0] on node {proc-gts-elk01}{dhLrHPqTR0y9IkU_kFS5Cw}{10.151.6.19}{10.151.6.19:9300}{max_local_storage_nodes=1, hostname=proc-gts-elk01, data=yes, master=yes} is not allowed, reason: [YES(below shard recovery limit of [2])][YES(node passes include/exclude/require filters)][YES(primary is already active)][YES(enough disk for shard on node, free: [81.4gb])][YES(shard not primary or relocation disabled)][YES(shard is not allocated to same node or host)][YES(allocation disabling is ignored)][YES(total shard limit disabled: [index: -1, cluster: -1] <= 0)][YES(node meets awareness requirements)][YES(allocation disabling is ignored)][NO(target node version [2.4.0] is older than source node version [2.4.1])]"
},
"status" : 400
}
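Presumably the fix is simply to bring both nodes to the same version (upgrade the 2.4.0 node to 2.4.1); once the versions match, the unassigned replicas should be allocated automatically and the cluster should turn green. One way to watch for that after the upgrade:
curl -XGET 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=60s&pretty'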