Elasticsearch: Too many open files

Date: 2016-04-18 08:22:50

Tags: elasticsearch logstash

Question: I have 5 nodes (1x master, 1x client, 3x data), all running in the same cluster. After uploading a large dataset, I get the following exception:

[2016-04-18 09:00:24,907][INFO ][node                     ] [Human Torch II] version[2.2.0], pid[68278], build[8ff36d1/2016-01-27T13:32:39Z]
[2016-04-18 09:00:24,908][INFO ][node                     ] [Human Torch II] initializing ...
[2016-04-18 09:00:25,483][INFO ][plugins                  ] [Human Torch II] modules [lang-expression, lang-groovy], plugins [], sites []
[2016-04-18 09:00:25,530][INFO ][env                      ] [Human Torch II] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [352.6gb], net total_space [464.8gb], spins? [unknown], types [hfs]
[2016-04-18 09:00:25,530][INFO ][env                      ] [Human Torch II] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-04-18 09:00:28,200][INFO ][node                     ] [Human Torch II] initialized
[2016-04-18 09:00:28,200][INFO ][node                     ] [Human Torch II] starting ...
[2016-04-18 09:00:28,322][INFO ][transport                ] [Human Torch II] publish_address {127.0.0.1:9300}, bound_addresses {[fe80::1]:9300}, {[::1]:9300}, {127.0.0.1:9300}
[2016-04-18 09:00:28,329][INFO ][discovery                ] [Human Torch II] TEST/xSxhxmpYQ9SPk4Ux8SufpQ
[2016-04-18 09:00:31,357][INFO ][cluster.service          ] [Human Torch II] new_master {Human Torch II}{xSxhxmpYQ9SPk4Ux8SufpQ}{127.0.0.1}{127.0.0.1:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-04-18 09:00:31,371][INFO ][http                     ] [Human Torch II] publish_address {127.0.0.1:9200}, bound_addresses {[fe80::1]:9200}, {[::1]:9200}, {127.0.0.1:9200}
[2016-04-18 09:00:31,371][INFO ][node                     ] [Human Torch II] started
[2016-04-18 09:00:31,740][INFO ][gateway                  ] [Human Torch II] recovered [128] indices into cluster_state
[2016-04-18 09:00:50,810][INFO ][cluster.service          ] [Human Torch II] added {{Xi'an Chi Xan}{OQjiTz-sR0Wcg8yIYnbSBA}{127.0.0.1}{127.0.0.1:9301}{data=false, master=false},}, reason: zen-disco-join(join from node[{Xi'an Chi Xan}{OQjiTz-sR0Wcg8yIYnbSBA}{127.0.0.1}{127.0.0.1:9301}{data=false, master=false}])
[2016-04-18 09:00:56,049][INFO ][cluster.service          ] [Human Torch II] added {{Riot}{VZQyBWSxS_W3H33_Xpx7kw}{127.0.0.1}{127.0.0.1:9302}{master=false},}, reason: zen-disco-join(join from node[{Riot}{VZQyBWSxS_W3H33_Xpx7kw}{127.0.0.1}{127.0.0.1:9302}{master=false}])
[2016-04-18 09:01:01,727][INFO ][cluster.service          ] [Human Torch II] added {{Topaz}{SShnnKN7SHKaxBGmn3TCig}{127.0.0.1}{127.0.0.1:9303}{master=false},}, reason: zen-disco-join(join from node[{Topaz}{SShnnKN7SHKaxBGmn3TCig}{127.0.0.1}{127.0.0.1:9303}{master=false}])
[2016-04-18 09:01:15,400][INFO ][cluster.service          ] [Human Torch II] added {{Moondark}{j9oCYfm_TbW0cdEciwyBhQ}{127.0.0.1}{127.0.0.1:9304}{master=false},}, reason: zen-disco-join(join from node[{Moondark}{j9oCYfm_TbW0cdEciwyBhQ}{127.0.0.1}{127.0.0.1:9304}{master=false}])
[2016-04-18 09:01:30,174][WARN ][cluster.action.shard     ] [Human Torch II] [logstash-2015.09.26][0] received shard failed for [logstash-2015.09.26][0], node[j9oCYfm_TbW0cdEciwyBhQ], [P], v[17], s[INITIALIZING], a[id=p6bW6TXYS9yJGiWpUbDkrg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-04-18T07:00:31.474Z]], indexUUID [xgsq0ZPVQ5OIdadydVB9rA], message [failed recovery], failure [IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to open reader on writer]; nested: NotSerializableExceptionWrapper[/Users/Desktop/elasticsearch-2.2.0Data3/data/TEST/nodes/0/indices/logstash-2015.09.26/0/index/_0.si: Too many open files in system]; ]
[logstash-2015.09.26][[logstash-2015.09.26][0]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to open reader on writer]; nested: NotSerializableExceptionWrapper[/Users/Desktop/elasticsearch-2.2.0Data3/data/TEST/nodes/0/indices/logstash-2015.09.26/0/index/_0.si: Too many open files in system];
    at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:254)
    at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
    at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: [logstash-2015.09.26][[logstash-2015.09.26][0]] EngineCreationFailureException[failed to open reader on writer]; nested: NotSerializableExceptionWrapper[/Users/Desktop/elasticsearch-2.2.0Data3/data/TEST/nodes/0/indices/logstash-2015.09.26/0/index/_0.si: Too many open files in system];
    at org.elasticsearch.index.engine.InternalEngine.createSearcherManager(InternalEngine.java:308)
    at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:167)
    at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
    at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1450)
    at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1434)
    at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:925)
    at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:897)
    at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)
    ... 5 more
Caused by: NotSerializableExceptionWrapper[/Users/Desktop/elasticsearch-2.2.0Data3/data/TEST/nodes/0/indices/logstash-2015.09.26/0/index/_0.si: Too many open files in system]
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
    at java.nio.channels.FileChannel.open(FileChannel.java:287)
    at java.nio.channels.FileChannel.open(FileChannel.java:335)
    at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:82)
    at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:186)
    at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
    at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
    at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109)
    at org.apache.lucene.codecs.lucene50.Lucene50SegmentInfoFormat.read(Lucene50SegmentInfoFormat.java:82)
    at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:362)
    at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:493)
    at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:490)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:731)
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:683)
    at org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:490)
    at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:95)
    at org.elasticsearch.index.store.Store.readSegmentsInfo(Store.java:163)
    at org.elasticsearch.index.store.Store.readLastCommittedSegmentsInfo(Store.java:148)
    at org.elasticsearch.index.engine.Engine.readLastCommittedSegmentInfos(Engine.java:349)
    at org.elasticsearch.index.engine.InternalEngine.createSearcherManager(InternalEngine.java:298)
    ... 12 more
    Suppressed: NotSerializableExceptionWrapper[/Users/Desktop/elasticsearch-2.2.0Data3/data/TEST/nodes/0/indices/logstash-2015.09.26/0/index/_0.si: Too many open files in system]
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
        at java.nio.channels.FileChannel.open(FileChannel.java:287)
        at java.nio.channels.FileChannel.open(FileChannel.java:335)
        at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:82)
        at org.apache.lucene.store.FileSwitchDirectory.openInput(FileSwitchDirectory.java:186)
        at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
        at org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:89)
        at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109)
        at org.apache.lucene.codecs.lucene50.Lucene50SegmentInfoFormat.read(Lucene50SegmentInfoFormat.java:82)
        at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:362)
        at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:128)
        at org.elasticsearch.index.engine.Engine.readLastCommittedSegmentInfos(Engine.java:345)
        ... 13 more

I can no longer start Elasticsearch. So my questions are:

  1. Is there a limit on the size of uploaded data?
  2. I tried to increase the maximum number of open files with sudo ulimit -n 65535, but it didn't work. Is this the actual problem?
  3. What is the best way to handle large amounts of data?
  4. Could the heap size be the cause of the exception?
  5. Update: curl -s -XGET 'localhost:9200/_cat/nodes?v&h=ip,fdc,fdm'

    ip         fdc  fdm 
    127.0.0.1 2588 9000 
    127.0.0.1 1942 9000 
    127.0.0.1 1896 9000 
    127.0.0.1 2823 9000 
    127.0.0.1  338 9000 
    

    Thanks for your help :)

1 answer:

Answer 0: (score: 1)

OK, so you have 5 nodes on the same host, each allowed at most 9000 open files (the fdm column). If the second column (fdc, the current file descriptor count) sums to more than that number, you will hit this error.
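That arithmetic can be sketched as follows; the fdc values are taken from the _cat/nodes output in the question (on a live cluster you could pipe the curl output into awk instead):

```shell
# Sum the per-node open file descriptor counts (fdc) from the question's
# _cat/nodes output and compare against the reported per-process limit (fdm).
# Live-cluster equivalent:
#   curl -s 'localhost:9200/_cat/nodes?h=fdc' | awk '{s += $1} END {print s}'
total=$(printf '%s\n' 2588 1942 1896 2823 338 | awk '{s += $1} END {print s}')
echo "total fdc: $total"
```

The sum is 9587, already above the 9000 limit each node reports in fdm.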

To see how many maximum open files your ES is configured with, start your process with -Des.max-open-files=true, and during startup the logs will show the maximum number of open files the process can have.
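As a quick sanity check before restarting, you can also inspect the limits that the Elasticsearch process will inherit from the shell launching it (a minimal sketch; the exact numbers depend on your OS and user):

```shell
# Show the soft and hard open-file limits of the current shell; a child
# process (e.g. bin/elasticsearch) inherits the soft limit unless it is
# raised explicitly before launch.
ulimit -Sn   # soft limit -- the one that actually triggers "Too many open files"
ulimit -Hn   # hard limit -- the ceiling a non-root user can raise the soft limit to
```

Note that ulimit is a shell builtin, so running it under sudo in a separate shell (as in the question) does not change the limit of the shell that later starts Elasticsearch.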

Check here and here (depending on which Linux distribution you have) for how to configure that setting for your distribution, but you may also need to adjust /etc/security/limits.conf.
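For reference, a typical limits.conf entry looks like the following; the user name elasticsearch is an assumption here, so substitute whichever user actually runs your nodes:

```
# /etc/security/limits.conf -- raise the open-file limit for the ES user
# (takes effect on that user's next login session)
elasticsearch  soft  nofile  65536
elasticsearch  hard  nofile  65536
```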