I have a 3-master-node + 3-data-node Elasticsearch cluster on Azure. I am trying to run a bulk operation, but I am getting failures about the nodes themselves. This is how I set up my client:
// Build the transport client settings (ES 2.x, optionally with Shield authentication).
final Builder builder = Settings.builder();
final org.elasticsearch.client.transport.TransportClient.Builder transBuilder = TransportClient.builder();
builder.put("cluster.name", esCluster);
if (esShield) {
    builder.put("shield.user", esUsername + ":" + esPassword);
    transBuilder.addPlugin(ShieldPlugin.class);
}
final Settings settings = builder.build();
TransportClient esClient = transBuilder.settings(settings).build();

// Register every configured host:port pair with the client.
final String[] hosts = esHost.split(",");
for (String host : hosts) {
    esClient.addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress(host, Integer.parseInt(esPort))));
}
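(Not part of my original code, but a minimal sanity check that could go right after the addresses are registered; connectedNodes() reports which of the configured nodes the transport client actually reached, and LOGGER is the same logger used further down.)

// Illustrative check only: list the nodes the transport client has actually connected to.
if (esClient.connectedNodes().isEmpty()) {
    LOGGER.info("Transport client could not connect to any node in: " + esHost);
}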
Here is the bulk operation:
BulkProcessor bulkProcessor = BulkProcessor.builder(getClient(), new BulkProcessor.Listener() {
    @Override
    public void beforeBulk(long executionId, BulkRequest request) {
        LOGGER.info("Going to execute new bulk composed of {" + request.numberOfActions() + "} actions");
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
        LOGGER.info("Executed bulk composed of {" + request.numberOfActions() + "} actions");
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
        LOGGER.info("Error executing bulk");
        failure.printStackTrace();
    }
    // Flush once the whole batch has been added; allow up to 250 concurrent bulk requests.
}).setBulkActions(docs.size()).setConcurrentRequests(250).build();

// Queue one index request per document.
for (DBObject doc : docs) {
    bulkProcessor.add(getClient().prepareIndex(indexName, typeName).setSource(doc.toMap()).request());
}
For a batch of 1,000 records like this, it starts off responding fine:
Going to execute new bulk composed of {1001} actions
Executed bulk composed of {1001} actions
Then I start getting the following errors:
transport:383 - [Stanley Stewart] failed to get node info for {#transport#-1}{10.0.0.10}{10.0.0.10:9300}, disconnecting...
ReceiveTimeoutTransportException[[][10.0.0.10:9300][cluster:monitor/nodes/liveness] request_id [60] timed out after [5000ms]]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
And finally I get the following error:
bulk:148 - [Stanley Stewart] failed to execute bulk request 1.
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{10.0.0.10}{10.0.0.10:9300}, {#transport#-2}{10.0.0.11}{10.0.0.11:9300}, {#transport#-3}{10.0.0.12}{10.0.0.12:9300}]]
    at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:290)
    at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:207)
    at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
    at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:288)
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
    at org.elasticsearch.client.support.AbstractClient.bulk(AbstractClient.java:436)
    at org.elasticsearch.action.bulk.Retry$AbstractRetryHandler.execute(Retry.java:219)
    at org.elasticsearch.action.bulk.Retry.withAsyncBackoff(Retry.java:72)
    at org.elasticsearch.action.bulk.BulkRequestHandler$AsyncBulkRequestHandler.execute(BulkRequestHandler.java:121)
    at org.elasticsearch.action.bulk.BulkProcessor.execute(BulkProcessor.java:312)
    at org.elasticsearch.action.bulk.BulkProcessor.executeIfNeeded(BulkProcessor.java:303)
    at org.elasticsearch.action.bulk.BulkProcessor.internalAdd(BulkProcessor.java:285)
    at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:268)
    at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:264)
    at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:250)
Can someone please help me figure out what is going on and how to fix it?
Answer 0 (score: 0)
It could be that the refresh interval of the index is too low. Try setting the index's refresh_interval to -1 before running the bulk operation; once the bulk is done you can reset it.
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html#bulk
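A rough sketch of how that could look with the transport client from the question (indexName stands for the index being loaded; this is illustrative, not tested against your cluster):

// Sketch: disable automatic refresh before the bulk load...
getClient().admin().indices().prepareUpdateSettings(indexName)
        .setSettings(Settings.builder().put("index.refresh_interval", "-1").build())
        .get();

// ... run the bulk load, then restore the refresh interval afterwards (e.g. the 1s default).
getClient().admin().indices().prepareUpdateSettings(indexName)
        .setSettings(Settings.builder().put("index.refresh_interval", "1s").build())
        .get();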