Fielddata is never evicted, even when it uses more than indices.fielddata.cache.size

Date: 2015-04-08 00:53:31

Tags: elasticsearch

We set indices.fielddata.cache.size to "6gb", but no evictions ever happen, even when the fielddata cache uses more than that. Eventually the circuit breaker trips:

"RemoteTransportException[[elasticsearch][inet[/0.0.0.0:9300]][indices:data/read/search[phase/query]]]; nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [myField] would be larger than limit of [9437184000/8.7gb]]; nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [myField] would be larger than limit of [9437184000/8.7gb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [myField] would be larger than limit of [9437184000/8.7gb]];

All fielddata-related settings:

indices.fielddata.cache.size: "6gb"
indices.breaker.fielddata.limit: "60%"
indices.breaker.request.limit: "30%"
indices.breaker.total.limit: "70%"
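Note that indices.breaker.fielddata.limit is a per-node limit, expressed as a percentage of that node's JVM heap, so the 8.7gb figure in the exception can be cross-checked against the 60% setting. A minimal sanity check of those numbers (assuming, as Elasticsearch's byte-size formatting does, that values are truncated to one decimal rather than rounded):

```python
import math

GB = 1024 ** 3  # Elasticsearch reports sizes in binary gigabytes

# "[9437184000/8.7gb]" from the CircuitBreakingException
breaker_limit_bytes = 9437184000
breaker_limit_pct = 0.60  # indices.breaker.fielddata.limit: "60%"

# Reproduce the "8.7gb" shown in the error (truncate to one decimal):
limit_gb = math.floor(breaker_limit_bytes / GB * 10) / 10
print(f"breaker limit: {limit_gb}gb")

# Working backwards, the per-node JVM heap implied by a 60% limit:
heap_bytes = breaker_limit_bytes / breaker_limit_pct
print(f"implied heap per node: ~{heap_bytes / GB:.1f}gb")
```

Under that assumption, the error message implies roughly a 14.6gb heap per node, which is consistent with the breaker being a per-node rather than cluster-wide limit.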

Here is the cluster's fielddata size (from a call to /_stats/fielddata?fields=*&human&pretty):

"fielddata" : {
  "memory_size" : "168.1gb",
  "memory_size_in_bytes" : 180591558840,
  "evictions" : 0
},

As I understand it, the maximum size should be capped at 6 gb * 24 nodes = 144 gb.
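That expectation can be checked directly against the reported memory_size_in_bytes. A small sketch comparing the expected cluster-wide cap to the observed total (the 24-node count is taken from the question; the per-node average is my own derived figure):

```python
import math

GB = 1024 ** 3

nodes = 24
cache_size_per_node = 6 * GB          # indices.fielddata.cache.size: "6gb"
observed_bytes = 180591558840         # "memory_size_in_bytes" from /_stats/fielddata

expected_cap = nodes * cache_size_per_node
print(f"expected cap:  {expected_cap / GB:.0f}gb")
# Reproduce the reported "168.1gb" (truncate to one decimal):
print(f"observed:      {math.floor(observed_bytes / GB * 10) / 10}gb")
print(f"avg per node:  {observed_bytes / nodes / GB:.1f}gb")
```

The observed total exceeds the expected 144gb cap, and the per-node average (about 7gb) is above the configured 6gb cache size with zero evictions, which is the contradiction the question is asking about.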

0 Answers:

No answers yet