This may be a stupid question, but I haven't been able to find an answer through Google.
So, here is what I have:
java 1.7
cassandra 1.2.8, running on a single node with -Xmx1G and -Xms1G, without any changes to the yaml file
I created the following test column family:
CREATE COLUMN FAMILY TEST_HUGE_SF
WITH comparator = UTF8Type
AND key_validation_class=UTF8Type;
Then I tried to insert rows into this column family, using the astyanax lib to access cassandra:
final long START = 1;
final long MAX_ROWS_COUNT = 1000000000; // 1 billion

Keyspace keyspace = AstyanaxProvider.getAstyanaxContext().getClient();
ColumnFamily<String, String> cf = new ColumnFamily<>(
        "TEST_HUGE_SF",
        StringSerializer.get(),
        StringSerializer.get());

MutationBatch mb = keyspace.prepareMutationBatch()
        .withRetryPolicy(new BoundedExponentialBackoff(250, 5000, 20));
for (long i = START; i < MAX_ROWS_COUNT; i++) {
    long t = i % 1000;
    if (t == 0) {
        // flush the current batch every 1000 rows, then start a fresh one
        System.out.println("pushed: " + i);
        mb.execute();
        Thread.sleep(1);
        mb = keyspace.prepareMutationBatch()
                .withRetryPolicy(new BoundedExponentialBackoff(250, 5000, 20));
    }
    ColumnListMutation<String> clm = mb.withRow(cf, String.format("row_%012d", i));
    clm.putColumn("col1", i);
    clm.putColumn("col2", t);
}
mb.execute(); // flush the final partial batch
As you can see from the code, I try to insert 1 billion rows, each containing two columns with simple long values.
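As an aside, the same loop can also be written with Astyanax's executeAsync() and a bounded in-flight window instead of the Thread.sleep(1) throttle. A minimal sketch of that variant follows; the batch size of 1000 and the window of 4 batches are arbitrary values, not tuned recommendations:

import java.util.ArrayDeque;
import java.util.Queue;

import com.google.common.util.concurrent.ListenableFuture;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.MutationBatch;
import com.netflix.astyanax.connectionpool.OperationResult;
import com.netflix.astyanax.model.ColumnFamily;
import com.netflix.astyanax.retry.BoundedExponentialBackoff;
import com.netflix.astyanax.serializers.StringSerializer;

public class AsyncWriter {
    static final int BATCH_SIZE = 1000; // rows per batch (arbitrary)
    static final int WINDOW = 4;        // max batches in flight (arbitrary)

    static void write(Keyspace keyspace, long maxRows) throws Exception {
        ColumnFamily<String, String> cf = new ColumnFamily<>(
                "TEST_HUGE_SF", StringSerializer.get(), StringSerializer.get());
        Queue<ListenableFuture<OperationResult<Void>>> inFlight = new ArrayDeque<>();
        MutationBatch mb = keyspace.prepareMutationBatch()
                .withRetryPolicy(new BoundedExponentialBackoff(250, 5000, 20));
        for (long i = 1; i < maxRows; i++) {
            mb.withRow(cf, String.format("row_%012d", i))
              .putColumn("col1", i)
              .putColumn("col2", i % 1000);
            if (i % BATCH_SIZE == 0) {
                inFlight.add(mb.executeAsync());
                mb = keyspace.prepareMutationBatch()
                        .withRetryPolicy(new BoundedExponentialBackoff(250, 5000, 20));
                while (inFlight.size() >= WINDOW) {
                    inFlight.poll().get(); // block until the oldest batch completes
                }
            }
        }
        mb.execute(); // flush the remainder synchronously
    }
}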
After inserting ~12.2 million rows, cassandra crashed with an OutOfMemoryError. The log contained the following:
INFO [CompactionExecutor:1571] 2014-08-08 08:31:45,334 CompactionTask.java (line 263) Compacted 4 sstables to [\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-2941,]. 865 252 169 bytes to 901 723 715 (~104% of original) in 922 963ms = 0,931728MB/s. 26 753 257 total rows, 26 753 257 unique. Row merge counts were {1:26753257, 2:0, 3:0, 4:0, }
INFO [CompactionExecutor:1571] 2014-08-08 08:31:45,337 CompactionTask.java (line 106) Compacting [SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-2069-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-629-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-2941-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-1328-Data.db')]
ERROR [CompactionExecutor:1571] 2014-08-08 08:31:46,167 CassandraDaemon.java (line 132) Exception in thread Thread[CompactionExecutor:1571,1,main]
java.lang.OutOfMemoryError
at sun.misc.Unsafe.allocateMemory(Native Method)
at org.apache.cassandra.io.util.Memory.<init>(Memory.java:52)
at org.apache.cassandra.io.util.Memory.allocate(Memory.java:60)
at org.apache.cassandra.utils.obs.OffHeapBitSet.<init>(OffHeapBitSet.java:40)
at org.apache.cassandra.utils.FilterFactory.createFilter(FilterFactory.java:143)
at org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:137)
at org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:126)
at org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.<init>(SSTableWriter.java:445)
at org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:92)
at org.apache.cassandra.db.ColumnFamilyStore.createCompactionWriter(ColumnFamilyStore.java:1958)
at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:144)
at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:59)
at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:62)
at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:191)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
INFO [CompactionExecutor:1570] 2014-08-08 08:31:46,994 CompactionTask.java (line 263) Compacted 4 sstables to [\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-3213,]. 34 773 524 bytes to 35 375 883 (~101% of original) in 44 162ms = 0,763939MB/s. 1 151 482 total rows, 1 151 482 unique. Row merge counts were {1:1151482, 2:0, 3:0, 4:0, }
INFO [CompactionExecutor:1570] 2014-08-08 08:31:47,105 CompactionTask.java (line 106) Compacting [SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-2069-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-629-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-2941-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-1328-Data.db')]
ERROR [CompactionExecutor:1570] 2014-08-08 08:31:47,110 CassandraDaemon.java (line 132) Exception in thread Thread[CompactionExecutor:1570,1,main]
java.lang.OutOfMemoryError
at sun.misc.Unsafe.allocateMemory(Native Method)
at org.apache.cassandra.io.util.Memory.<init>(Memory.java:52)
at org.apache.cassandra.io.util.Memory.allocate(Memory.java:60)
at org.apache.cassandra.utils.obs.OffHeapBitSet.<init>(OffHeapBitSet.java:40)
at org.apache.cassandra.utils.FilterFactory.createFilter(FilterFactory.java:143)
at org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:137)
at org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:126)
at org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.<init>(SSTableWriter.java:445)
at org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:92)
at org.apache.cassandra.db.ColumnFamilyStore.createCompactionWriter(ColumnFamilyStore.java:1958)
at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:144)
at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:59)
at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:62)
at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:191)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
As I can see, cassandra crashes during sstable compaction.
Does this mean that to handle more rows, cassandra needs more heap space?
I expected a lack of heap space to only affect performance. Can someone explain why my expectation is wrong?
Answer 0 (score: 1):
As has already been noted, a 1GB heap is very small. For Cassandra 2.0, you can check this tuning guide for more information: http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_tune_jvm_c.html
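Note also that the stack trace shows the allocation failing in sun.misc.Unsafe.allocateMemory while building an OffHeapBitSet, which is the off-heap bloom filter for a new SSTable. Row count drives memory use directly here: every open SSTable carries a filter sized to its key count, and compaction allocates the replacement filter while the input SSTables are still open. A back-of-envelope sketch of the size, assuming the textbook bloom filter sizing formula and Cassandra 1.2's default bloom_filter_fp_chance of 0.01 (an assumption, since the column family was created without overriding it):

public class BloomSize {
    public static void main(String[] args) {
        // standard bloom filter sizing: m = -n * ln(p) / (ln 2)^2 bits
        long keys = 26_753_257L; // row count taken from the compaction log line above
        double p = 0.01;         // assumed bloom_filter_fp_chance (1.2 default)
        double bits = -keys * Math.log(p) / (Math.log(2) * Math.log(2));
        System.out.printf("~%.0f MB off-heap for one filter%n", bits / 8 / 1024 / 1024);
        // prints ~31 MB - modest per filter, but one exists per open SSTable,
        // and this allocation comes on top of the 1GB heap's own footprint
    }
}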
Another consideration is how garbage collection is being handled. In the cassandra log directory there should also be GC logs indicating how frequent and how long the collections were. You can monitor them in real time with jvisualvm if you want.
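If you would rather poll GC activity programmatically than attach a GUI, here is a minimal sketch using the standard GarbageCollectorMXBean API. Note that it reports the JVM it runs inside; to watch Cassandra's JVM you would read the same beans over a JMX connection, which is what jvisualvm does. The 5-second interval is arbitrary:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcWatcher {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                // cumulative collection count and total pause time since JVM start
                System.out.printf("%s: count=%d time=%dms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            Thread.sleep(5000); // poll interval, arbitrary
        }
    }
}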