We are currently testing Cassandra with the following table schema:
CREATE TABLE coreglead_v2.stats_by_site_user (
d_tally text, -- ex.: '2016-01', '2016-02', etc..
site_id int,
d_date timestamp,
site_user_id int,
accepted counter,
error counter,
impressions_negative counter,
impressions_positive counter,
rejected counter,
revenue counter,
reversals_rejected counter,
reversals_revenue counter,
PRIMARY KEY (d_tally, site_id, d_date, site_user_id)
) WITH CLUSTERING ORDER BY (site_id ASC, d_date ASC, site_user_id ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
For our testing purposes we wrote a Python script that randomizes data across the 2016 calendar year (12 months in total). Since our partition key is the d_tally column alone, we expect the number of keys to be 12 ('2016-01' through '2016-12').
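A minimal sketch of what such a generator might look like (the actual script and its Cassandra-driver insert calls are not shown in the question; the function name, column subset, and value ranges here are illustrative assumptions). The point is that d_tally, the partition key, can only take 12 distinct values for 2016 data:

```python
import random
from datetime import datetime, timedelta

def random_2016_rows(n, seed=42):
    """Generate n random rows shaped like the stats_by_site_user schema."""
    rng = random.Random(seed)
    start = datetime(2016, 1, 1)
    rows = []
    for _ in range(n):
        d_date = start + timedelta(days=rng.randrange(366))  # 2016 is a leap year
        rows.append({
            "d_tally": d_date.strftime("%Y-%m"),  # partition key: '2016-01'..'2016-12'
            "site_id": rng.randint(1, 5),
            "d_date": d_date,
            "site_user_id": rng.randrange(1, 2_000_000),
            "impressions_positive": 1,
            "revenue": rng.randrange(100, 2000),
        })
    return rows

rows = random_2016_rows(10_000)
print(sorted({r["d_tally"] for r in rows}))  # exactly 12 distinct partition keys
```

Whatever the real script looks like, any data confined to 2016 gives exactly 12 distinct d_tally values, which is why the cfstats output below is surprising.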
Running nodetool cfstats shows us the following:
Table: stats_by_site_user
SSTable count: 4
Space used (live): 131977793
Space used (total): 131977793
Space used by snapshots (total): 0
Off heap memory used (total): 89116
SSTable Compression Ratio: 0.18667406304929424
Number of keys (estimate): 24
Memtable cell count: 120353
Memtable data size: 23228804
Memtable off heap memory used: 0
Memtable switch count: 10
Local read count: 169
Local read latency: 1.938 ms
Local write count: 4912464
Local write latency: 0.066 ms
Pending flushes: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.00000
Bloom filter space used: 128
Bloom filter off heap memory used: 96
Index summary off heap memory used: 76
Compression metadata off heap memory used: 88944
Compacted partition minimum bytes: 5839589
Compacted partition maximum bytes: 43388628
Compacted partition mean bytes: 16102786
Average live cells per slice (last five minutes): 102.91627247589237
Maximum live cells per slice (last five minutes): 103
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
What confuses us is the "Number of keys (estimate): 24" part. Given our schema, and given that our test data (over 5 million writes) consists only of 2016 data, where does the estimate of 24 keys come from?
Here is a sample of our data:
d_tally | site_id | d_date | site_user_id | accepted | error | impressions_negative | impressions_positive | rejected | revenue | reversals_rejected | reversals_revenue
---------+---------+--------------------------+--------------+----------+-------+----------------------+----------------------+----------+---------+--------------------+-------------------
2016-01 | 1 | 2016-01-01 00:00:00+0000 | 240054 | 1 | null | null | 1 | null | 553 | null | null
2016-01 | 1 | 2016-01-01 00:00:00+0000 | 1263968 | 1 | null | null | 1 | null | 1093 | null | null
2016-01 | 1 | 2016-01-01 00:00:00+0000 | 1267841 | 1 | null | null | 1 | null | 861 | null | null
2016-01 | 1 | 2016-01-01 00:00:00+0000 | 1728725 | 1 | null | null | 1 | null | 425 | null | null
Answer (score: 2):
The number of keys is an estimate (though it should be pretty close). Cassandra builds a data sketch (HyperLogLog) of each SSTable's partition keys and merges the sketches together to estimate the total cardinality.
Unfortunately, no equivalent sketch exists for the memtable, so the memtable's key count is simply added to the SSTable estimate. This means anything present in both the memtables and the SSTables is counted twice, which is why you see 24 instead of 12.
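The double counting described above can be illustrated with a toy example, using exact Python sets in place of Cassandra's HyperLogLog sketches (the variable names are illustrative, not Cassandra internals):

```python
# Keys present in flushed SSTables (their HLL sketches can be merged,
# so overlap between SSTables is deduplicated correctly):
sstable_keys = {f"2016-{m:02d}" for m in range(1, 13)}

# The same 12 keys also still have unflushed data sitting in the memtable:
memtable_keys = {f"2016-{m:02d}" for m in range(1, 13)}

# What a merged sketch over everything would report:
true_estimate = len(sstable_keys | memtable_keys)  # 12

# What cfstats does: the memtable has no sketch, so its count is
# added on top of the merged SSTable estimate:
cfstats_style_estimate = len(sstable_keys) + len(memtable_keys)  # 24

print(true_estimate, cfstats_style_estimate)
```

Consistent with this, one would expect the estimate to drop back toward 12 once the memtables are flushed (e.g. via nodetool flush), since all keys would then be covered by SSTable sketches alone.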