Handling Cassandra blocking writes when memtable_cleanup_threshold is exceeded

Time: 2019-02-13 05:47:09

Tags: scala cassandra spark-streaming cassandra-3.0

I was reading about Cassandra's flush strategy and came across the following statement -

 If the data to be flushed exceeds the memtable_cleanup_threshold, Cassandra blocks writes until the next flush succeeds.

Now my question is: say we are writing heavily to Cassandra, around 10K records per second, with the application running 24*7. To avoid blocked writes, what should we set for the following parameters? (An illustrative cassandra.yaml sketch follows the list.)

memtable_heap_space_in_mb 
memtable_offheap_space_in_mb 
memtable_cleanup_threshold
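
For illustration, here is a hedged cassandra.yaml sketch showing how these knobs relate; the numbers are assumptions to make the relationship concrete, not tuned recommendations for this workload:

    # cassandra.yaml -- illustrative values only
    memtable_heap_space_in_mb: 2048      # defaults to 1/4 of the JVM heap
    memtable_offheap_space_in_mb: 2048   # defaults to 1/4 of the JVM heap
    memtable_flush_writers: 2
    # memtable_cleanup_threshold defaults to 1 / (memtable_flush_writers + 1),
    # i.e. ~0.33 with 2 flush writers: a flush of the largest memtable starts
    # once live memtables use about a third of the space configured above.
    memtable_cleanup_threshold: 0.33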

Also, since this is time-series data, do I need to make any change to the compaction strategy as well? If so, which one best fits my case?

My Spark application takes data from Kafka and inserts it into Cassandra continuously (a sketch of this pipeline follows below). After some period of time it hangs; when I analyzed the node at that point, nodetool compactionstats showed many pending tasks.
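
For context, a minimal Scala sketch of the ingest path described above (Kafka -> Spark Streaming -> Cassandra via the spark-cassandra-connector). The broker address, topic, batch interval, record format, and row type are all assumptions, since they are not shown in the question; only the keyspace/table names come from the stats below:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
    import com.datastax.spark.connector.streaming._

    // Hypothetical row type; the real schema of locationinfo is not shown in the question.
    case class LocationInfo(deviceId: String, ts: Long, lat: Double, lon: Double)

    object TrackfleetIngest {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("kafka-to-cassandra")
          .set("spark.cassandra.connection.host", "127.0.0.1") // assumed contact point

        val ssc = new StreamingContext(conf, Seconds(5)) // assumed batch interval

        val kafkaParams = Map[String, Object](
          "bootstrap.servers"  -> "localhost:9092", // assumed broker
          "key.deserializer"   -> classOf[StringDeserializer],
          "value.deserializer" -> classOf[StringDeserializer],
          "group.id"           -> "trackfleet-ingest"
        )

        val stream = KafkaUtils.createDirectStream[String, String](
          ssc,
          LocationStrategies.PreferConsistent,
          ConsumerStrategies.Subscribe[String, String](Seq("locations"), kafkaParams)
        )

        // Each micro-batch is written straight to the table from the question.
        stream
          .map(r => toLocation(r.value()))
          .saveToCassandra("trackfleet_db", "locationinfo")

        ssc.start()
        ssc.awaitTermination()
      }

      // Hypothetical CSV parser; the real record format is an assumption.
      def toLocation(line: String): LocationInfo = {
        val f = line.split(',')
        LocationInfo(f(0), f(1).toLong, f(2).toDouble, f(3).toDouble)
      }
    }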

nodetool tablehistograms



Percentile  SSTables    Write Latency    Read Latency    Partition Size   Cell Count
                        (micros)         (micros)        (bytes)
50%         642.00      88.15            25109.16        310              24
75%         770.00      263.21           668489.53       535              50
95%         770.00      4055.27          668489.53       3311             310
98%         770.00      8409.01          668489.53       73457            6866
99%         770.00      12108.97         668489.53       219342           20501
Min         4.00        11.87            20924.30        150              9
Max         770.00      1996099.05       668489.53       4866323          454826


Keyspace : trackfleet_db
    Read Count: 7183347
    Read Latency: 15.153115504235004 ms
    Write Count: 2402229293
    Write Latency: 0.7495135263492935 ms
    Pending Flushes: 1
        Table: locationinfo
        SSTable count: 3307
        Space used (live): 62736956804
        Space used (total): 62736956804
        Space used by snapshots (total): 10469827269
        Off heap memory used (total): 56708763
        SSTable Compression Ratio: 0.38214618375483633
        Number of partitions (estimate): 493571
        Memtable cell count: 2089
        Memtable data size: 1168808
        Memtable off heap memory used: 0
        Memtable switch count: 88033
        Local read count: 765497
        Local read latency: 162.880 ms
        Local write count: 782044138
        Local write latency: 1.859 ms
        Pending flushes: 0
        Percent repaired: 0.0
        Bloom filter false positives: 368
        Bloom filter false ratio: 0.00000
        Bloom filter space used: 29158176
        Bloom filter off heap memory used: 29104216
        Index summary off heap memory used: 7883835
        Compression metadata off heap memory used: 19720712
        Compacted partition minimum bytes: 150
        Compacted partition maximum bytes: 4866323
        Compacted partition mean bytes: 7626
        Average live cells per slice (last five minutes): 3.5
        Maximum live cells per slice (last five minutes): 6
        Average tombstones per slice (last five minutes): 1.0
        Maximum tombstones per slice (last five minutes): 1
        Dropped Mutations: 359

After changing the compaction strategy:

Keyspace : trackfleet_db
    Read Count: 8568544
    Read Latency: 15.943608060365916 ms
    Write Count: 2568676920
    Write Latency: 0.8019530641630868 ms
    Pending Flushes: 1
        Table: locationinfo
        SSTable count: 5843
        SSTables in each level: [5842/4, 0, 0, 0, 0, 0, 0, 0, 0]
        Space used (live): 71317936302
        Space used (total): 71317936302
        Space used by snapshots (total): 10469827269
        Off heap memory used (total): 105205165
        SSTable Compression Ratio: 0.3889946058934169
        Number of partitions (estimate): 542002
        Memtable cell count: 235
        Memtable data size: 131501
        Memtable off heap memory used: 0
        Memtable switch count: 93947
        Local read count: 768148
        Local read latency: NaN ms
        Local write count: 839003671
        Local write latency: 1.127 ms
        Pending flushes: 1
        Percent repaired: 0.0
        Bloom filter false positives: 1345
        Bloom filter false ratio: 0.00000
        Bloom filter space used: 54904960
        Bloom filter off heap memory used: 55402400
        Index summary off heap memory used: 14884149
        Compression metadata off heap memory used: 34918616
        Compacted partition minimum bytes: 150
        Compacted partition maximum bytes: 4866323
        Compacted partition mean bytes: 4478
        Average live cells per slice (last five minutes): NaN
        Maximum live cells per slice (last five minutes): 0
        Average tombstones per slice (last five minutes): NaN
        Maximum tombstones per slice (last five minutes): 0
        Dropped Mutations: 660

Thanks

2 Answers:

Answer 0: (score: 1)

Unless there is a proven problem, I would not touch the memtable settings. They only really block when you are writing at a rate that exceeds what your disks can absorb, or when GC is messing up the timings. "10K records per second with the application running 24*7" is actually not that much, given that the records are not very large; it will not overrun the write path (a decent system can sustain a constant load of 100k-200k writes/s). nodetool tablestats, tablehistograms, and your schema can help identify whether records are too big or partitions too wide, and give a better indication of what your compaction strategy should be (probably TWCS, but possibly LCS if you have any reads at all and partitions span about a day). A minimal CQL sketch for the TWCS option follows.
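
For reference, a minimal sketch of switching the table from the question to TWCS; the one-day window is an assumption based on the "partitions span about a day" remark, not something prescribed by the answer:

    -- Illustrative only: window unit/size must match how the data is partitioned over time
    ALTER TABLE trackfleet_db.locationinfo
    WITH compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'DAYS',
        'compaction_window_size': '1'
    };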

The pending tasks in nodetool compactionstats have nothing to do with the memtable settings; they mean your compactions cannot keep up. That can be just a spike while a bulk job runs, small partitions flushing, or repairs streaming SSTables over, but if the backlog grows instead of shrinking, you need to tune your compaction strategy. A lot really depends on the data model and the stats (tablestats/tablehistograms). A sketch of the relevant nodetool knobs follows.
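
As a hedged example of watching and adjusting the compaction backlog (the 64 MB/s figure below is an arbitrary illustration, not a recommendation):

    # Watch the backlog: pending tasks should trend down, not up
    nodetool compactionstats

    # Check and, if the disks have headroom, raise the compaction throughput cap
    nodetool getcompactionthroughput
    nodetool setcompactionthroughput 64    # MB/s; 0 removes the throttle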

Answer 1: (score: 0)

You can refer to this link for tuning the above parameters: http://abiasforaction.net/apache-cassandra-memtable-flush/


memtable_cleanup_threshold - the percentage of your total available memtable space that will trigger a memtable cleanup. memtable_cleanup_threshold defaults to 1 / (memtable_flush_writers + 1). By default this is essentially 33% of your memtable_heap_space_in_mb. A scheduled cleanup results in flushing the table/column family that occupies the largest portion of memtable space. This continues until your available memtable memory drops below the cleanup threshold.
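
To make that default concrete, a small worked example (the 2048 MB figure is assumed):

    memtable_cleanup_threshold = 1 / (memtable_flush_writers + 1)
                               = 1 / (2 + 1) ≈ 0.33

    With memtable_heap_space_in_mb = 2048, a cleanup is scheduled once live
    memtables exceed roughly 2048 * 0.33 ≈ 676 MB, and the largest memtable
    is flushed until usage falls back below that threshold.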