"Too many open files" on Elasticsearch

Time: 2016-03-14 14:21:47

Tags: elasticsearch logstash

I have an ES cluster, and for the past 2 days I have been getting this error:

[2016-03-14 15:08:48,342][WARN ][cluster.action.shard] [node-01] [logstash-2016.03.14][2] received shard failed for [logstash-2016.03.14][2], node[72oHnFiXTVqgXaUYKTAu2Q], [P], s[INITIALIZING], indexUUID [AKybrAEZRXebyRxcmzTqJQ], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[logstash-2016.03.14][2] failed recovery]; nested: EngineCreationFailureException[[logstash-2016.03.14][2] failed to open reader on writer]; nested: FileSystemException[/var/lib/elasticsearch/elasticsearch/nodes/0/indices/logstash-2016.03.14/2/index/_lg_Lucene410_0.dvm: Too many open files]; ]]

If I run:

curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true' | grep "max_file_descriptors"

I get this output:

"max_file_descriptors" : 65535,
"max_file_descriptors" : 65535,

I have also already configured the file /etc/security/limits.conf.

If I check the status of the indices, some of them are RED, so Logstash errors out and everything crashes.
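
To see exactly which indices are red and which shards are failing, the cluster health and cat APIs can be queried on the same node (a sketch, assuming the node listens on localhost:9200 as in the curl above):

# per-index health (green / yellow / red)
curl -XGET 'http://localhost:9200/_cluster/health?level=indices&pretty=true'

# index overview with health, doc counts and size
curl -XGET 'http://localhost:9200/_cat/indices?v'

# shards that are not started (unassigned / initializing)
curl -XGET 'http://localhost:9200/_cat/shards?v' | grep -v STARTED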

1 Answer:

Answer 0 (score: 1)

Just set elasticsearch - nofile to unlimited, and make sure you are running it as the right user.
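
A minimal sketch of what that can look like, assuming the node runs as an elasticsearch user and that limits.conf is actually applied to the service on this system (the exact user name and startup mechanism may differ):

# /etc/security/limits.conf -- '-' sets both the soft and the hard limit
elasticsearch - nofile unlimited

# check which user the node is actually running as
ps -o user,pid,cmd -C java | grep elasticsearch

# after restarting the node, confirm what it now reports over the API
curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true' | grep "max_file_descriptors"

If the system does not accept unlimited for nofile, a sufficiently large numeric value (for example 65536 or higher) can be used instead.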