How to assign unassigned shards in Elasticsearch

Asked: 2017-04-24 05:41:25

Tags: elasticsearch lucene sharding

I have the following setup in Elasticsearch:

[root elasticsearch]$ curl localhost:9200/_cluster/health?pretty
{
"cluster_name" : "iresbi",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 10,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 0.0
}

I have 3 nodes that act as both data and master nodes. Searches against the cluster currently fail with the following exception:

[2017-04-24T01:36:44,134][DEBUG][o.e.a.s.TransportSearchAction] [node-1] All shards failed for phase: [query]
org.elasticsearch.action.NoShardAvailableActionException: null
Caused by: org.elasticsearch.action.NoShardAvailableActionException

When I ran the _cat/shards API, I got the following output:

[root elasticsearch]$ curl localhost:9200/_cat/shards?pretty
customer 4 p UNASSIGNED    
customer 4 r UNASSIGNED    
customer 2 p UNASSIGNED    
customer 2 r UNASSIGNED    
customer 3 p UNASSIGNED    
customer 3 r UNASSIGNED    
customer 1 p UNASSIGNED    
customer 1 r UNASSIGNED    
customer 0 p UNASSIGNED    
customer 0 r UNASSIGNED

Disk space usage is as follows:

[root elasticsearch]$ df
Filesystem                        1K-blocks    Used Available Use%     Mounted on
/dev/mapper/vg_root-root            8125880 1587988   6102080  21% /
devtmpfs                            3994324       0   3994324   0%   /dev
tmpfs                               4005212       4   4005208   1%     /dev/shm
tmpfs                               4005212    8624   3996588   1% /run
tmpfs                               4005212       0   4005212   0%   /sys/fs/cgroup
/dev/vda3                            999320    1320    945572   1%   /crashdump
/dev/vda1                            245679  100027    132545  44% /boot
/dev/mapper/vg_root-var             6061632 5727072      3604 100%   /var
/dev/mapper/vg_root-tmp             1998672    6356   1871076   1%  /tmp
/dev/mapper/vg_root-var_log         1998672   55800   1821632   3%  /var/log
/dev/mapper/vg_root-apps           25671908  292068  24052736   2%  /apps
/dev/mapper/vg_root-home            1998672  169996   1707436  10% /home
/dev/mapper/vg_root-var_log_audit   1998672    8168   1869264   1% /var/log/audit
/dev/vdb                          257898948   61464 244713900   1% /data
tmpfs                                801044       0    801044   0%      /run/user/1000

These shards need to be reassigned. I could add one more node to the cluster; would that solve the problem? How do I fix this?

1 Answer:

Answer 0 (score: 0)

Based on some information gathered from others: if you have not changed /etc/elasticsearch.yml, Elasticsearch will store its data in /var/lib/elasticsearch/.

So your /var filesystem being 100% full is most likely the cause of your problem.
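
To confirm that, a couple of diagnostic checks can help (a rough sketch: the _cat/allocation and _cluster/allocation/explain APIs are available in Elasticsearch 5.x, and the data path below assumes the default package layout):

# Disk usage and shard count per data node, as Elasticsearch sees it
curl localhost:9200/_cat/allocation?v

# Ask the cluster why an unassigned shard is not being allocated (ES 5.x+)
curl localhost:9200/_cluster/allocation/explain?pretty

# How much of /var the default data directory is actually using
du -sh /var/lib/elasticsearch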

The right solution will depend on how much data is in the shards, whether replicas exist, and whether the /data mount point is where you intend to keep the Elasticsearch data.

In all cases, the proper resolution is to migrate the index data to a filesystem with enough free space.
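
As a rough sketch of what that migration could look like, done on one node at a time (assumptions: a package install managed by systemd, config at /etc/elasticsearch/elasticsearch.yml, and /data as the target filesystem; adjust to your setup):

# Stop Elasticsearch on this node before touching its data directory
sudo systemctl stop elasticsearch

# Copy the existing data onto the large /data filesystem, preserving ownership
sudo mkdir -p /data/elasticsearch
sudo rsync -a /var/lib/elasticsearch/ /data/elasticsearch/
sudo chown -R elasticsearch:elasticsearch /data/elasticsearch

# Point the node at the new location by setting, in elasticsearch.yml:
#   path.data: /data/elasticsearch

# Start the node again and watch the shards recover
sudo systemctl start elasticsearch
curl localhost:9200/_cat/shards?pretty

Done node by node, the primaries should recover from their local copies, and the cluster should go back to green once all three nodes point at /data, assuming the shard data on disk is intact.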

Someone else has already asked about the migration method, and it has been answered, here.