We currently have a situation where a DSE node decided to decommission itself. It looks like it first ran into Too many open files errors and then decided it was fine to remove the node from the ring because "the disk is FULL". Setting aside the whole philosophical question of a node removing itself, the disk is only 1/4 utilized.
Here are the relevant entries from the log file:
ERROR [pool-1-thread-1] 2014-06-20 01:53:19,957 DiskHealthChecker.java (line 62) Error in finding disk space for directory /raid0/cassandra/data
java.io.IOException: Cannot run program "df": error=24, Too many open files
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
at java.lang.Runtime.exec(Runtime.java:617)
at java.lang.Runtime.exec(Runtime.java:485)
at org.apache.commons.io.FileSystemUtils.openProcess(FileSystemUtils.java:535)
at org.apache.commons.io.FileSystemUtils.performCommand(FileSystemUtils.java:482)
at org.apache.commons.io.FileSystemUtils.freeSpaceUnix(FileSystemUtils.java:396)
at org.apache.commons.io.FileSystemUtils.freeSpaceOS(FileSystemUtils.java:266)
at org.apache.commons.io.FileSystemUtils.freeSpaceKb(FileSystemUtils.java:200)
at org.apache.commons.io.FileSystemUtils.freeSpaceKb(FileSystemUtils.java:171)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:52)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.checkDiskSpace(DiskHealthChecker.java:71)
at com.datastax.bdp.util.DiskHealthChecker.access$000(DiskHealthChecker.java:18)
at com.datastax.bdp.util.DiskHealthChecker$DiskHealthCheckTask.run(DiskHealthChecker.java:104)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: error=24, Too many open files
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
... 24 more
INFO [pool-1-thread-1] 2014-06-20 01:53:19,959 DiskHealthChecker.java (line 82) Removing this node from the ring for the disk is close to FULL
INFO [pool-1-thread-1] 2014-06-20 01:53:19,996 StorageService.java (line 947) LEAVING: sleeping 30000 ms for pending range setup
ERROR [ReadStage:30] 2014-06-20 01:53:22,058 CassandraDaemon.java (line 191) Exception in thread Thread[ReadStage:30,5,main]
java.lang.RuntimeException: java.lang.RuntimeException: java.io.FileNotFoundException: /raid0/cassandra/data/linkcurrent_search/content_items/linkcurrent_search-content_items-ic-1803-Data.db (Too many open files)
at org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:64)
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: /raid0/cassandra/data/linkcurrent_search/content_items/linkcurrent_search-content_items-ic-1803-Data.db (Too many open files)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:58)
at org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1213)
at org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:66)
at org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1017)
at org.apache.cassandra.db.RowIteratorFactory.getIterator(RowIteratorFactory.java:72)
at org.apache.cassandra.db.ColumnFamilyStore.getSequentialIterator(ColumnFamilyStore.java:1432)
at org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1484)
at org.apache.cassandra.service.RangeSliceVerbHandler.executeLocally(RangeSliceVerbHandler.java:46)
at org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:58)
... 4 more
Caused by: java.io.FileNotFoundException: /raid0/cassandra/data/linkcurrent_search/content_items/linkcurrent_search-content_items-ic-1803-Data.db (Too many open files)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:67)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:75)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:54)
... 12 more
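For context on why this reads like a false positive: the health check failed because the JVM could not fork a df process at all (error=24, Too many open files), not because the measurement actually showed the disk to be low on space. Below is a minimal, hypothetical sketch of a disk-space check that does not fork an external process and that reports an unreadable measurement as "unknown" instead of treating it as "full"; the class name and the 10% threshold are illustrative assumptions, not DSE's actual DiskHealthChecker logic.

import java.io.File;

// Hypothetical illustration only; this is not the DataStax DiskHealthChecker code.
// File.getUsableSpace()/getTotalSpace() query the filesystem directly, so there is
// no external "df" process to fork (forking df is the call that failed with error=24 above).
public class DiskSpaceProbe {

    /** Fraction of usable space in [0,1], or -1 if it cannot be determined. */
    static double usableFraction(String path) {
        File dir = new File(path);
        long total = dir.getTotalSpace();   // returns 0 when the value is unknown
        if (total == 0L) {
            return -1;
        }
        return (double) dir.getUsableSpace() / total;
    }

    public static void main(String[] args) {
        String dataDir = args.length > 0 ? args[0] : "/raid0/cassandra/data";
        double free = usableFraction(dataDir);
        if (free < 0) {
            // An unreadable measurement is not the same condition as a full disk.
            System.out.println("Could not determine free space for " + dataDir);
        } else if (free < 0.10) {           // example threshold: warn below 10% usable
            System.out.println("WARNING: disk is close to full (" + free + " usable)");
        } else {
            System.out.println("Disk OK (" + free + " usable)");
        }
    }
}

The only point of the sketch is the distinction the log above blurs: "could not measure free space" and "disk is close to FULL" are different conditions, and only the second one justifies leaving the ring.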
Answer 0 (score: 1)
Thanks for finding this. We will disable this feature and leave it to other disk-monitoring tools to alert the admins when the disk is close to full, so that the admins can do something about it before it actually fills up.
Answer 1 (score: 1)
If you haven't already, you may want to set health_check_interval: 0 in the dse.yaml file if you currently have this option enabled.
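For reference, this is a one-line change; a minimal sketch of the relevant dse.yaml entry is shown below (the file path in the comment is an assumption based on a typical package install, so adjust it to your own layout):

# /etc/dse/dse.yaml  (path is an assumption; adjust to your install)
# Per the advice above, an interval of 0 turns the periodic disk health check off,
# so a transient "Too many open files" error can no longer trigger removal from the ring.
health_check_interval: 0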