Why are specific shards unassigned?

Asked: 2015-10-23 08:34:32

Tags: elasticsearch

Two specific shards on my Elasticsearch server (2.0.0-rc1) stubbornly remain unassigned. They are always the same shards of the same index (all the other shards/indices are fine).

I forced them to reallocate using /_cluster/reroute, and the cluster went green for a while (a few hours), but this morning I saw that the problem was back.
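
For reference, the reroute call was roughly the following (the index, shard, and node values here are just illustrative; allow_primary is only needed to force an unassigned primary and creates an empty shard, so it discards whatever data that primary held):

# force-allocate a stuck shard onto a specific node
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands": [
    {
      "allocate": {
        "index": "logs",
        "shard": 3,
        "node": "eu4",
        "allow_primary": true
      }
    }
  ]
}'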

I deleted the index and recreated it, and the same problem came back. All the nodes are OK (they are up, and memory, heap, and CPU all look fine). No other index has the problem (and never has). I have also restarted all the nodes.
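
The delete/recreate itself was nothing special, roughly along these lines (index name and shard/replica counts taken from the settings output further down; mappings omitted):

# drop the problem index and recreate it with the same shard layout
curl -XDELETE 'localhost:9200/logs'
curl -XPUT 'localhost:9200/logs' -d '{
  "settings": { "number_of_shards": 5, "number_of_replicas": 1 }
}'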

What could cause specific shards of a specific index to become (and keep going back to) the UNASSIGNED state?
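
For what it's worth, this is how I check which shards are stuck, as a minimal sketch (the unassigned.reason column name is an assumption on my part for 2.0; drop it if it isn't recognized):

# list shard states per index, including the reported reason for being unassigned
curl -s 'localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason'
# shard-level view of cluster health
curl -s 'localhost:9200/_cluster/health?level=shards&pretty'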

Here is the log from the master node. It shows some cryptic messages about relocating shards (which are beyond me):

[2015-10-22 10:19:14,281][INFO ][node                     ] [eu4] version[2.0.0-rc1], pid[21767], build[4757962/2015-10-01T10:06:08Z]
[2015-10-22 10:19:14,281][INFO ][node                     ] [eu4] initializing ...
[2015-10-22 10:19:14,597][INFO ][plugins                  ] [eu4] loaded [], sites [head, kopf]
[2015-10-22 10:19:14,910][INFO ][env                      ] [eu4] using [1] data paths, mounts [[/ (/dev/sda3)]], net usable_space [217.8gb], net total_space [809gb], spins? [possibly], types [ext3]
[2015-10-22 10:19:18,603][INFO ][node                     ] [eu4] initialized
[2015-10-22 10:19:18,603][INFO ][node                     ] [eu4] starting ...
[2015-10-22 10:19:18,835][INFO ][transport                ] [eu4] publish_address {10.81.163.129:9300}, bound_addresses {10.81.163.129:9300}
[2015-10-22 10:19:18,849][INFO ][discovery                ] [eu4] security/XzHqhAC-Rmm4bOwD7Jxn5w
[2015-10-22 10:19:22,086][INFO ][cluster.service          ] [eu4] detected_master {eu5}{4nyuFXa5TTWi08Xtq9tkhg}{10.81.147.186}{10.81.147.186:9300}, added {{eu3}{fmjphsGPRu-Bj1ZerSEi4Q}{10.81.163.112}{10.81.163.112:9300}{master=true},{LENOV27}{fpDgEjXrRrmpNsZ4evXawA}{10.233.85.45}{10.233.85.45:9300}{master=false},{eu5}{4nyuFXa5TTWi08Xtq9tkhg}{10.81.147.186}{10.81.147.186:9300},{eu2}{nHV1K3b8RUWayqzpN-xf3w}{10.242.136.232}{10.242.136.232:9300},}, reason: zen-disco-receive(from master [{eu5}{4nyuFXa5TTWi08Xtq9tkhg}{10.81.147.186}{10.81.147.186:9300}])
[2015-10-22 10:19:22,747][INFO ][http                     ] [eu4] publish_address {10.81.163.129:9200}, bound_addresses {10.81.163.129:9200}
[2015-10-22 10:19:22,747][INFO ][node                     ] [eu4] started
[2015-10-22 10:19:23,144][DEBUG][action.admin.indices.stats] [eu4] [indices:monitor/stats] failed to execute operation for shard [[gnopgn1.html][4], node[XzHqhAC-Rmm4bOwD7Jxn5w], [P], v[36], s[INITIALIZING], a[id=moyNCX6WR5-D7azHXzPzCw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-10-22T08:15:48.363Z]]]
[gnopgn1.html][[gnopgn1.html][4]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: ShardNotFoundException[no such shard];
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:399)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:376)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:365)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: [gnopgn1.html][[gnopgn1.html][4]] ShardNotFoundException[no such shard]
    at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:198)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:98)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:395)
    ... 7 more
[2015-10-22 10:19:23,150][DEBUG][action.admin.indices.stats] [eu4] [indices:monitor/stats] failed to execute operation for shard [[rwsnsr1.html][4], node[XzHqhAC-Rmm4bOwD7Jxn5w], [P], v[17], s[INITIALIZING], a[id=Z3r0lXjgT-6zRiArXoTnyA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-10-22T08:15:48.363Z]]]
[rwsnsr1.html][[rwsnsr1.html][4]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: ShardNotFoundException[no such shard];
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:399)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:376)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:365)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: [rwsnsr1.html][[rwsnsr1.html][4]] ShardNotFoundException[no such shard]
    at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:198)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:98)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:395)
    ... 7 more
[2015-10-22 10:19:23,152][DEBUG][action.admin.indices.stats] [eu4] [indices:monitor/stats] failed to execute operation for shard [[wvarrr1.html][4], node[XzHqhAC-Rmm4bOwD7Jxn5w], [P], v[22], s[INITIALIZING], a[id=uQCmezoHTwmpjdW_6WXzUQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-10-22T08:15:48.363Z]]]
[wvarrr1.html][[wvarrr1.html][4]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: ShardNotFoundException[no such shard];
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:399)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:376)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:365)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: [wvarrr1.html][[wvarrr1.html][4]] ShardNotFoundException[no such shard]
    at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:198)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:98)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:395)
    ... 7 more
[2015-10-22 10:19:23,153][DEBUG][action.admin.indices.stats] [eu4] [indices:monitor/stats] failed to execute operation for shard [[phppath][4], node[XzHqhAC-Rmm4bOwD7Jxn5w], [P], v[63], s[INITIALIZING], a[id=w9IukO9ySBSWK0uKGrFDSA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-10-22T08:15:48.364Z]]]
[phppath][[phppath][4]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: ShardNotFoundException[no such shard];
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:399)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:376)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:365)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: [phppath][[phppath][4]] ShardNotFoundException[no such shard]
    at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:198)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:98)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:395)
    ... 7 more
[2015-10-22 10:19:27,107][DEBUG][action.admin.indices.stats] [eu4] [indices:monitor/stats] failed to execute operation for shard [[flex2gateway][3], node[XzHqhAC-Rmm4bOwD7Jxn5w], [R], v[21], s[INITIALIZING], a[id=r347ICHmQ6yVUp2K4iepkA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-10-22T08:15:48.362Z]]]
[flex2gateway][[flex2gateway][3]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:399)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:376)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:365)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: [flex2gateway][[flex2gateway][3]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]
    at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:957)
    at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:791)
    at org.elasticsearch.index.shard.IndexShard.docStats(IndexShard.java:612)
    at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:131)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:395)
    ... 7 more
[2015-10-22 10:19:27,109][DEBUG][action.admin.indices.stats] [eu4] [indices:monitor/stats] failed to execute operation for shard [[parngo1.html][2], node[XzHqhAC-Rmm4bOwD7Jxn5w], [R], v[50], s[INITIALIZING], a[id=L6zq9BYXTHKpDQHgyvqNoA], unassigned_info[[reason=CLUSTER_RECOVERED], at[2015-10-22T08:15:48.360Z]]]
[parngo1.html][[parngo1.html][2]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: ShardNotFoundException[no such shard];
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:399)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:376)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:365)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: [parngo1.html][[parngo1.html][2]] ShardNotFoundException[no such shard]
    at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:198)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:98)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:395)
    ... 7 more
[2015-10-22 10:24:39,980][DEBUG][action.admin.indices.stats] [eu4] [indices:monitor/stats] failed to execute operation for shard [[logs][4], node[XzHqhAC-Rmm4bOwD7Jxn5w], [R], v[12], s[INITIALIZING], a[id=wS-A9m3GSnWRTsuioUXt9w], unassigned_info[[reason=REPLICA_ADDED], at[2015-10-22T08:21:03.653Z]], expected_shard_size[76525]]
[logs][[logs][4]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:399)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:376)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:365)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: [logs][[logs][4]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]
    at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:957)
    at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:791)
    at org.elasticsearch.index.shard.IndexShard.docStats(IndexShard.java:612)
    at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:131)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:395)
    ... 7 more
[2015-10-22 10:24:44,977][DEBUG][action.admin.indices.stats] [eu4] [indices:monitor/stats] failed to execute operation for shard [[logs][4], node[XzHqhAC-Rmm4bOwD7Jxn5w], [R], v[12], s[INITIALIZING], a[id=wS-A9m3GSnWRTsuioUXt9w], unassigned_info[[reason=REPLICA_ADDED], at[2015-10-22T08:21:03.653Z]], expected_shard_size[76525]]
[logs][[logs][4]] BroadcastShardOperationFailedException[operation indices:monitor/stats failed]; nested: IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]];
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:399)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:376)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:365)
    at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:350)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: [logs][[logs][4]] IllegalIndexShardStateException[CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED, RELOCATED]]
    at org.elasticsearch.index.shard.IndexShard.readAllowed(IndexShard.java:957)
    at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:791)
    at org.elasticsearch.index.shard.IndexShard.docStats(IndexShard.java:612)
    at org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:131)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)
    at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:395)
    ... 7 more
[2015-10-22 14:07:08,773][WARN ][monitor.jvm              ] [eu4] [gc][young][13667][8] duration [1.2s], collections [1]/[1.3s], total [1.2s]/[1.4s], memory [1.9gb]->[111.4mb]/[15.7gb], all_pools {[young] [1.8gb]->[15.8mb]/[1.8gb]}{[survivor] [75mb]->[65.6mb]/[232.9mb]}{[old] [8.6mb]->[31.4mb]/[13.7gb]}

Edit: additional commands as requested in the comments:

GET /_cat/shards?index=rwsnsr1.html&v

index        shard prirep state   docs store ip             node        
rwsnsr1.html 1     p      STARTED    0  156b 10.81.147.186  eu5         
rwsnsr1.html 1     r      STARTED    0  156b 10.233.85.45   LENOV27 
rwsnsr1.html 3     p      STARTED    0  156b 10.81.163.112  eu3         
rwsnsr1.html 3     r      STARTED    0  156b 10.81.163.129  eu4         
rwsnsr1.html 2     p      STARTED    0  156b 10.81.147.186  eu5         
rwsnsr1.html 2     r      STARTED    0  156b 10.242.136.232 eu2         
rwsnsr1.html 4     r      STARTED    0  156b 10.81.163.112  eu3         
rwsnsr1.html 4     p      STARTED    0  156b 10.81.163.129  eu4         
rwsnsr1.html 0     p      STARTED    0  156b 10.233.85.45   LENOV27
rwsnsr1.html 0     r      STARTED    0  156b 10.242.136.232 eu2    
GET /_cat/indices?v

health status index               pri rep docs.count docs.deleted store.size pri.store.size 
green  open   logstash-2015.01.30   5   1          3            0     30.9kb         15.5kb 
green  open   lcds                  5   1          0            0      1.5kb           780b 
green  open   .kibana               1   1         26            3    102.3kb         51.1kb 
green  open   flex2gateway          5   1          0            0      1.5kb           780b 
green  open   wvarrr1.html          5   1          0            0      1.5kb           780b 
green  open   webui                 5   1          0            0      1.5kb           780b 
green  open   logstash-2015.01.28   5   1          3            0     39.9kb         19.9kb 
green  open   messagebroker         5   1          0            0      1.5kb           780b 
green  open   spipe                 5   1          0            0      1.5kb           780b 
green  open   veaees1.html          5   1          0            0      1.5kb           780b 
green  open   rwsnsr1.html          5   1          0            0      1.5kb           780b 
green  open   phppath               5   1          0            0      1.5kb           780b 
green  open   scan_vulns            5   1    2139930            0      1.5gb        784.6mb 
green  open   nessus_logs           5   1    1852503            0        1gb        531.1mb 
green  open   blazeds               5   1          0            0      1.5kb           780b 
green  open   ogrnge1.html          5   1          0            0      1.5kb           780b 
green  open   wnoaog1.html          5   1          0            0      1.5kb           792b 
green  open   ips                   5   1      69405         3329     20.7mb         10.3mb 
green  open   enarow1.html          5   1          0            0      1.5kb           780b 
green  open   scan_constant         5   1      72673        22028    623.9mb          325mb 
green  open   gnopgn1.html          5   1          0            0      1.5kb           780b 
green  open   gawvoo1.html          5   1          0            0      1.5kb           780b 
green  open   ssllabs               5   1        183            0    215.6kb        107.8kb 
red    open   logs                  5   1        360            0    474.6kb        243.9kb 
green  open   parngo1.html          5   1          0            0      1.5kb           780b 
green  open   logstash-2015.01.29   5   1         11            0    141.9kb         70.9kb 
green  open   rossae1.html          5   1          0            0      1.5kb           780b 
green  open   perl                  5   1          0            0      1.5kb           780b 
GET /_cat/shards?index=logs&v

index shard prirep state      docs  store ip             node        
logs  1     p      STARTED      94 78.5kb 10.81.163.112  eu3         
logs  1     r      STARTED      94 24.5kb 10.242.136.232 eu2         
logs  3     p      UNASSIGNED                                        
logs  3     r      UNASSIGNED                                        
logs  2     r      STARTED      87 63.9kb 10.81.147.186  eu5         
logs  2     p      STARTED      87 23.1kb 10.233.85.45   LENOV27
logs  4     p      STARTED      96 65.3kb 10.81.163.129  eu4         
logs  4     r      STARTED      96 65.2kb 10.242.136.232 eu2         
logs  0     p      STARTED      83 76.9kb 10.81.147.186  eu5         
logs  0     r      STARTED      83 76.9kb 10.81.163.112  eu3      
curl -XGET -s "localhost:9200/_cat/segments?v&h=i,s,p,seg,g,v" |grep ^logs

logs                0 p _u      30 5.2.1
logs                0 p _v      31 5.2.1
logs                0 p _w      32 5.2.1
logs                0 p _x      33 5.2.1
logs                0 p _y      34 5.2.1
logs                0 p _z      35 5.2.1
logs                0 p _10     36 5.2.1
logs                0 p _11     37 5.2.1
logs                0 p _12     38 5.2.1
logs                0 r _k      20 5.2.1
logs                0 r _v      31 5.2.1
logs                0 r _w      32 5.2.1
logs                0 r _x      33 5.2.1
logs                0 r _y      34 5.2.1
logs                0 r _z      35 5.2.1
logs                0 r _10     36 5.2.1
logs                0 r _11     37 5.2.1
logs                0 r _12     38 5.2.1
logs                1 p _14     40 5.2.1
logs                1 p _15     41 5.2.1
logs                1 p _16     42 5.2.1
logs                1 p _17     43 5.2.1
logs                1 p _18     44 5.2.1
logs                1 p _19     45 5.2.1
logs                1 p _1a     46 5.2.1
logs                1 p _1b     47 5.2.1
logs                1 p _1c     48 5.2.1
logs                1 r _1e     50 5.2.1
logs                2 r _14     40 5.2.1
logs                2 r _15     41 5.2.1
logs                2 r _16     42 5.2.1
logs                2 r _17     43 5.2.1
logs                2 r _18     44 5.2.1
logs                2 r _19     45 5.2.1
logs                2 r _1a     46 5.2.1
logs                2 p _14     40 5.2.1
logs                4 p _14     40 5.2.1
logs                4 p _15     41 5.2.1
logs                4 p _16     42 5.2.1
logs                4 p _17     43 5.2.1
logs                4 p _18     44 5.2.1
logs                4 p _19     45 5.2.1
logs                4 p _1a     46 5.2.1
logs                4 r _14     40 5.2.1
logs                4 r _15     41 5.2.1
logs                4 r _16     42 5.2.1
logs                4 r _17     43 5.2.1
logs                4 r _18     44 5.2.1
logs                4 r _19     45 5.2.1
logs                4 r _1a     46 5.2.1
GET /logs/_settings

{
   "logs": {
      "settings": {
         "index": {
            "creation_date": "1445537462267",
            "uuid": "EUzsk149T8-Wo6Qvf_O8Vw",
            "number_of_replicas": "1",
            "number_of_shards": "5",
            "version": {
               "created": "2000051"
            }
         }
      }
   }
}    

0 Answers:

There are no answers yet.