Too many open files during a Cassandra nodetool repair

Date: 2015-02-16 08:48:02

Tags: cassandra nodetool

I ran a nodetool repair command on one of the nodes. The node went down, and its log file shows the following error messages:

INFO  [STREAM-IN-/192.168.2.100] 2015-02-13 21:36:23,077 StreamResultFuture.java:180 - [Stream #8fb54551-b3bd-11e4-9620-4b92877f0505] Session with /192.168.2.100 is complete
INFO  [STREAM-IN-/192.168.2.100] 2015-02-13 21:36:23,078 StreamResultFuture.java:212 - [Stream #8fb54551-b3bd-11e4-9620-4b92877f0505] All sessions completed
INFO  [STREAM-IN-/192.168.2.100] 2015-02-13 21:36:23,078 StreamingRepairTask.java:96 - [repair #508bd650-b3bd-11e4-9620-4b92877f0505] streaming task succeed, returning response to node4/192.168.2.104
INFO  [AntiEntropyStage:1] 2015-02-13 21:38:52,795 RepairSession.java:237 - [repair #508bd650-b3bd-11e4-9620-4b92877f0505] repcode is fully synced
INFO  [AntiEntropySessions:27] 2015-02-13 21:38:52,795 RepairSession.java:299 - [repair #508bd650-b3bd-11e4-9620-4b92877f0505] session completed successfully
INFO  [AntiEntropySessions:27] 2015-02-13 21:38:52,795 RepairSession.java:260 - [repair #03858e40-b3be-11e4-9620-4b92877f0505] new session: will sync node4/192.168.2.104, /192.168.2.100, /192.168.2.101 on range (8805399388216156805,8848902871518111273] for data.[repcode]
INFO  [AntiEntropySessions:27] 2015-02-13 21:38:52,795 RepairJob.java:145 - [repair #03858e40-b3be-11e4-9620-4b92877f0505] requesting merkle trees for repcode (to [/192.168.2.100, /192.168.2.101, node4/192.168.2.104])
WARN  [StreamReceiveTask:74] 2015-02-13 21:41:58,544 CLibrary.java:231 - open(/user/jlor/apache-cassandra/data/data/data/repcode-398f26f0b11511e49faf195596ed1fd9, O_RDONLY) failed, errno (23).
WARN  [STREAM-IN-/192.168.2.101] 2015-02-13 21:41:58,672 CLibrary.java:231 - open(/user/jlor/apache-cassandra/data/data/data/repcode-398f26f0b11511e49faf195596ed1fd9, O_RDONLY) failed, errno (23).
WARN  [STREAM-IN-/192.168.2.101] 2015-02-13 21:41:58,871 CLibrary.java:231 - open(/user/jlor/apache-cassandra/data/data/data/repcode-398f26f0b11511e49faf195596ed1fd9, O_RDONLY) failed, errno (23).
ERROR [StreamReceiveTask:74] 2015-02-13 21:41:58,986 CassandraDaemon.java:153 - Exception in thread Thread[StreamReceiveTask:74,5,main]
org.apache.cassandra.io.FSWriteError: java.io.FileNotFoundException: /user/jlor/apache-cassandra/data/data/data/repcode-398f26f0b11511e49faf195596ed1fd9/data-repcode-tmp-ka-245139-TOC.txt (Too many open files in system)
        at org.apache.cassandra.io.sstable.SSTable.appendTOC(SSTable.java:282) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:483) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:434) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:429) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:424) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:120) ~[apache-cassandra-2.1.2.jar:2.1.2]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_31]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_31]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_31]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_31]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_31]
Caused by: java.io.FileNotFoundException: /usr/jlo/apache-cassandra/data/data/data/repcode-398f26f0b11511e49faf195596ed1fd9/data-repcode-tmp-ka-245139-TOC.txt (Too many open files in system)
        at java.io.FileOutputStream.open(Native Method) ~[na:1.8.0_31]
        at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_31]
        at java.io.FileWriter.<init>(FileWriter.java:107) ~[na:1.8.0_31]
        at org.apache.cassandra.io.sstable.SSTable.appendTOC(SSTable.java:276) ~[apache-cassandra-2.1.2.jar:2.1.2]
        ... 10 common frames omitted

We have a small cluster of 5 nodes, node0-node4. I have a table with about 3.4 billion rows and a replication factor of 3. Here is the table definition:

CREATE TABLE data.repcode (
    rep int,
    type text,
    code text,
    yyyymm int,
    trd int,
    eq map<text, bigint>,
    iq map<text, bigint>,
    PRIMARY KEY ((rep, type, code), yyyymm, trd))
WITH CLUSTERING ORDER BY (yyyymm ASC, trd ASC)
AND bloom_filter_fp_chance = 0.1
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 'max_threshold': '32'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

I am using Cassandra 2.1.2. I have already set the maximum open file limit on all nodes to 200'

Before I issued the nodetool repair command, I counted the files in the data directory on each node (a way to take such a count is sketched after the lists below). Here are the per-node counts before the crash:

node0: 27'099 
node1: 27'187 
node2: 36'131 
node3: 26'635 
node4: 26'371 

And here they are now, after the crash:

node0:   946'555 
node1:   973'531 
node2:   844'211 
node3: 1'024'147 
node4: 1'971'772 
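For reference, a count like the above can be taken with a recursive file count such as the following sketch. The data directory path is an assumption taken from the log excerpt above and will differ per installation:

# Count all files under the Cassandra data directory
# (path assumed from the log excerpt; adjust to your installation)
find /user/jlor/apache-cassandra/data/data -type f | wc -l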

Is it normal for the number of files in a Unix directory to grow to such an extent? What can I do to avoid this situation, and how can I prevent it in the future? Should I increase the open file limit? It already seems very large to me. Is my cluster too small for this many records? Should I use a different compaction strategy?

Thanks for your help.

2 answers:

Answer 0 (score: 2)

What is the output of ulimit -a | grep "open files"?
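For reference, that line of the output looks like this on a typical Linux system (the value shown is illustrative, not a recommendation):

open files                      (-n) 100000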

Cassandra's recommended resource limits (ulimit) should be set as follows (for RHEL 6):

cassandra - memlock unlimited
cassandra - nofile 100000
cassandra - nproc 32768
cassandra - as unlimited

The exact file and user names will vary based on your installation type and the user you run Cassandra as. The above assumes a packaged install, with these lines in /etc/security/limits.d/cassandra.conf and Cassandra running as the "cassandra" user (for a tarball install you want /etc/security/limits.conf).

If your setup differs from this, check the documentation I linked above. Note that some distributions require the limits to be set explicitly for the root user if you run Cassandra as root.
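Also note that the exception text is "Too many open files in system", which on Linux corresponds to ENFILE (errno 23, visible in the WARN lines of the log), the kernel-wide file handle limit, rather than EMFILE, the per-process limit that ulimit and limits.conf control. A quick sketch for checking both on Linux (the sysctl value is illustrative):

# Per-process open file limit (what limits.conf / ulimit -n controls)
ulimit -n

# Kernel-wide maximum number of open file handles (what ENFILE hits)
cat /proc/sys/fs/file-max

# Currently allocated handles: "allocated  free  maximum"
cat /proc/sys/fs/file-nr

# Raise the kernel-wide limit on the running system
# (persist it in /etc/sysctl.conf to survive reboots)
sysctl -w fs.file-max=1000000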

Edit 20180330

Note that the /etc/security/limits.conf adjustments above apply to CentOS/RHEL 6 systems. Otherwise, the adjustments should be made in /etc/security/limits.d/cassandra.conf.

Answer 1 (score: 0)

The number of open files should not be a limiting factor for a Cassandra installation; it could be 10,000,000 with no problem at all. The real problem when you have that many open files is that you have too many SSTables, which leads to very long restart times. To prevent it, run nodetool compact on every node where the number of open files significantly exceeds the initial count. To count the number of files open during the repair, run the following as the user that owns the Cassandra software:

# Sample the Cassandra JVM's open file count every 500 seconds;
# the PID is found by grepping the process list for the gc.log path
# on the Cassandra command line ("grep -v color" drops the grep
# process itself, which shows up as "grep --color ...").
for i in {1..1000}; do
  echo -n "Open file count, sample $i: "
  lsof -p $(ps -ef | grep "/var/log/cassandra/gc.log" | grep -v "color" | awk '{print $2}') | wc -l
  sleep 500
done
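If the sampled count climbs steadily toward the configured nofile limit while the repair runs, the limit (or an excessive SSTable count) is the likely culprit rather than a descriptor leak; compare the samples against the before/after file counts in the question.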