Elasticsearch index corrupted

Date: 2015-02-02 10:44:27

Tags: elasticsearch lucene

I am getting the following log from my Elasticsearch box:

org.apache.lucene.index.CorruptIndexException: [myindex][2] Preexisting corrupted index [corrupted_5Y_pGXmYQOG5PGlZURWqxw] caused by: CorruptIndexException[checksum failed (hardware problem?) : expected=9cf1207c actual=4eda74a3 (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/mnt/vol1/myindex/nodes/0/indices/myindex/2/index/_3758.fdt")))]
org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?) : expected=9cf1207c actual=4eda74a3 (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/mnt/vol1/myindex/nodes/0/indices/myindex/2/index/_3758.fdt")))
    at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:211)
    at org.apache.lucene.codecs.CodecUtil.checksumEntireFile(CodecUtil.java:268)
    at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.checkIntegrity(CompressingStoredFieldsReader.java:535)
    at org.apache.lucene.index.SegmentReader.checkIntegrity(SegmentReader.java:624)
    at org.apache.lucene.index.SegmentMerger.<init>(SegmentMerger.java:61)
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4158)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3768)
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
    at org.apache.lucene.index.TrackingConcurrentMergeScheduler.doMerge(TrackingConcurrentMergeScheduler.java:106)
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)

    at org.elasticsearch.index.store.Store.failIfCorrupted(Store.java:452)
    at org.elasticsearch.index.store.Store.failIfCorrupted(Store.java:433)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:725)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:578)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:182)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:431)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:153)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)

Can someone tell me how to fix this? Also, what are the best practices to reduce such problems?
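One way to confirm which segment files actually fail their checksum is Lucene's CheckIndex tool, run against the shard's index directory while the shard is not open. A minimal sketch, assuming the lucene-core jar bundled with a typical Elasticsearch install (the classpath below is a guess for a common layout):

    # Verify the corrupted shard's segments with Lucene's CheckIndex (read-only by default).
    # The jar path is an assumption; point it at the lucene-core jar your ES ships with.
    java -cp /usr/share/elasticsearch/lib/lucene-core-*.jar \
      org.apache.lucene.index.CheckIndex \
      /mnt/vol1/myindex/nodes/0/indices/myindex/2/index

CheckIndex also accepts a -fix flag that drops any unreadable segments, but that permanently discards the documents in them, so treat it as a last resort.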

I ran into a similar problem earlier, and I had to delete the contents of the replica box and reassign it to the cluster. That fixed it for a few days, but today the problem reappeared.
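For reference, on ES 1.x that kind of reassignment can be done with the cluster reroute API after wiping the bad shard copy from disk. A hedged sketch; "node-1" is a placeholder for the target node's name:

    # Force-allocate shard 2 of myindex on a node; allow_primary accepts the loss of
    # whatever data was in that shard copy ("node-1" is a placeholder).
    curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
      "commands": [
        { "allocate": { "index": "myindex", "shard": 2, "node": "node-1", "allow_primary": true } }
      ]
    }'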

Edit: The problem was that all the Elasticsearch boxes shared the same hard disk, so the disk crashed when multiple replicas tried to write to the same disk location. That was a mistake, and I have now created a separate disk for each replica.
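A sketch of what that separation can look like in each node's elasticsearch.yml, assuming one dedicated mount point per node (the paths are illustrative):

    # elasticsearch.yml on the first node -- its own dedicated disk
    path.data: /mnt/vol1/elasticsearch

    # elasticsearch.yml on the second node
    path.data: /mnt/vol2/elasticsearch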

1 answer:

Answer 0 (score: 0)

It depends on which ES version you are on. Prior to 1.3.2, you can try setting the indices recovery compression to false.
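indices.recovery.compress is a dynamic cluster setting, so it can be switched off without a restart. A minimal sketch:

    # Disable compression of shard recovery traffic (dynamic cluster setting in ES 1.x)
    curl -XPUT 'localhost:9200/_cluster/settings' -d '{
      "transient": { "indices.recovery.compress": false }
    }'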

I ran into this exception on 1.3.2. The cause was a full disk. Some shards recovered after a while, others did not. Reindexing helped.
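As a preventive measure for the full-disk case, ES 1.x also has disk-based shard allocation thresholds that stop placing shards on nearly full nodes. A sketch, assuming ES 1.3+, where these settings are available and enabled by default:

    # Refuse new shards on nodes past the low watermark, and move shards away
    # past the high watermark (the values shown are the 1.x defaults)
    curl -XPUT 'localhost:9200/_cluster/settings' -d '{
      "persistent": {
        "cluster.routing.allocation.disk.threshold_enabled": true,
        "cluster.routing.allocation.disk.watermark.low": "85%",
        "cluster.routing.allocation.disk.watermark.high": "90%"
      }
    }'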