I am using Apache Spark to write data to Cassandra from multiple nodes. The job runs for about 40 minutes, and then some of the Cassandra nodes crash with the following error:
ERROR [COMMIT-LOG-ALLOCATOR] 2016-04-28 12:10:42,628 JVMStabilityInspector.java:139 - JVM state determined to be unstable. Exiting forcefully due to:
java.io.FileNotFoundException: /data10/cass/commitlog/CommitLog-6-1461831890172.log (Too many open files)
at java.io.RandomAccessFile.open0(Native Method) ~[na:1.8.0_77]
at java.io.RandomAccessFile.open(RandomAccessFile.java:316) ~[na:1.8.0_77]
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243) ~[na:1.8.0_77]
at org.apache.cassandra.db.commitlog.MemoryMappedSegment.createBuffer(MemoryMappedSegment.java:61) ~[apache-cassandra-3.5.jar:3.5]
at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:167) ~[apache-cassandra-3.5.jar:3.5]
at org.apache.cassandra.db.commitlog.MemoryMappedSegment.<init>(MemoryMappedSegment.java:46) ~[apache-cassandra-3.5.jar:3.5]
at org.apache.cassandra.db.commitlog.CommitLogSegment.createSegment(CommitLogSegment.java:124) ~[apache-cassandra-3.5.jar:3.5]
at org.apache.cassandra.db.commitlog.CommitLogSegmentManager$1.runMayThrow(CommitLogSegmentManager.java:122) ~[apache-cassandra-3.5.jar:3.5]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) [apache-cassandra-3.5.jar:3.5]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
How can I fix this?
My Cassandra cluster has 4 nodes with a replication factor of 2. Xmx and Xms are set to 16G on each node. The data I am loading is about 1.5 million images plus their metadata; each image is roughly 10-30 KB and is stored in a blob column.
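Since the root cause in the trace is `Too many open files`, a first diagnostic step is to compare the file-descriptor limit applied to the Cassandra process against its actual usage. Here is a minimal sketch, assuming a Linux node with `/proc` available; the `pgrep -f CassandraDaemon` pattern mentioned in the comment is an assumption and may need adjusting for your install:

```shell
# Report the open-file limit and current fd usage for a given PID.
# Usage: fd_usage <pid>
fd_usage() {
  pid="$1"
  # Limit actually enforced on the running process (may differ from your shell's ulimit)
  grep 'Max open files' "/proc/${pid}/limits"
  # Number of file descriptors the process currently holds
  printf 'fds in use: %s\n' "$(ls "/proc/${pid}/fd" | wc -l)"
}

# On a Cassandra node you would point it at the Cassandra JVM, e.g.:
#   fd_usage "$(pgrep -f CassandraDaemon | head -n 1)"
```

If usage is close to the limit, raising `nofile` for the Cassandra user (e.g. in `/etc/security/limits.conf` or the systemd unit) is the usual remedy; Cassandra's production recommendations call for a much higher limit than the common default of 1024.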