With Cassandra, why does 1000 ops/sec generate 400 MB/s of I/O with the layout below?

Date: 2016-06-14 16:13:37

Tags: performance cassandra performance-testing

This is a read-heavy workload (85% reads, 15% writes) against a 3.3-billion-record dataset. I'm pulling 450 MB/s from SATA SSDs (maxing out the controller) but only getting ~1000 ops/sec. I don't think it should be doing that much I/O, but I'm new to Cassandra and don't really understand why it behaves this way.
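To put the mismatch in perspective, here is a back-of-envelope read-amplification calculation using only the figures quoted above (400 MB/s of disk I/O, ~1000 ops/sec, and the 72-byte mean compacted partition size from the cfstats output below):

```python
# Rough read-amplification estimate from the observed numbers.
io_bytes_per_sec = 400 * 1024 * 1024   # observed disk I/O (400 MB/s)
ops_per_sec = 1000                     # observed throughput
mean_partition_bytes = 72              # "Compacted partition mean bytes" from nodetool cfstats

bytes_per_op = io_bytes_per_sec / ops_per_sec
amplification = bytes_per_op / mean_partition_bytes
print(f"{bytes_per_op / 1024:.0f} KiB read per op, ~{amplification:.0f}x amplification")
# → 410 KiB read per op, ~5825x amplification
```

In other words, each read is pulling several thousand times more data off disk than the partition it returns, which is what makes the question worth asking.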

Keyspace: stresscql
    Read Count: 81263974
    Read Latency: 37.271172120563534 ms.
    Write Count: 18576806
    Write Latency: 0.01787069736315274 ms.
    Pending Flushes: 0
            Table: u
            SSTable count: 251
            SSTables in each level: [3, 0, 0, 0, 0, 0, 0, 0, 0]
            Space used (live): 1712307458381
            Space used (total): 1712307458381
            Space used by snapshots (total): 0
            Off heap memory used (total): 3358807452
            SSTable Compression Ratio: 0.6798834918759787
            Number of keys (estimate): 3354476152
            Memtable cell count: 0
            Memtable data size: 0
            Memtable off heap memory used: 0
            Memtable switch count: 10
            Local read count: 81263974
            Local read latency: 40.886 ms
            Local write count: 18576806
            Local write latency: 0.020 ms
            Pending flushes: 0
            Bloom filter false positives: 0
            Bloom filter false ratio: 0.00000
            Bloom filter space used: 2097321112
            Bloom filter off heap memory used: 2715609520
            Index summary off heap memory used: 633723580
            Compression metadata off heap memory used: 9474352
            Compacted partition minimum bytes: 51
            Compacted partition maximum bytes: 72
            Compacted partition mean bytes: 72
            Average live cells per slice (last five minutes): 1.0000004430007106
            Maximum live cells per slice (last five minutes): 10
            Average tombstones per slice (last five minutes): 1.0
            Maximum tombstones per slice (last five minutes): 1



### DML ###

keyspace: stresscql

keyspace_definition: |
   CREATE KEYSPACE stresscql WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

table: u

table_definition: |
  CREATE TABLE IF NOT EXISTS u (
    c uuid,
    f bigint,
    i boolean,
    bl boolean,
    t timestamp,
    PRIMARY KEY(c,f, i)
  ) WITH compaction = { 'class': 'SizeTieredCompactionStrategy' }
  AND compression = { 'crc_check_chance' : 1.0, 'sstable_compression' : 'SnappyCompressor' }

### Column Distribution Specifications ###

columnspec:
  - name: c

  - name: f

  - name: i

  - name: bl

  - name: t

### Batch Ratio Distribution Specifications ###

insert:
  partitions: fixed(1)
  select:    fixed(1)/1
  batchtype: UNLOGGED

queries:
   selectuser:
      cql: select * from u where c = ? and f = ? and i = ?
      fields: samerow
   newselect:
      cql: select bl from u where c = ? and f = ? and i = ?

I also tried LeveledCompactionStrategy; here is what I got. Still clearly limited by I/O:

This row should already have been cached (not by the row cache, which I'm not using, but by whatever other caching happens), since I had to select it first to get the uuid value for 'c'.

cqlsh> select * from stresscql.u where c = 2953e0ef-44fd-44fb-8d3e-8a2498ae694f;

 c                                    | f           | i    | bl   | t
--------------------------------------+-------------+------+------+--------------------------
 2953e0ef-44fd-44fb-8d3e-8a2498ae694f | 58180117557 | True | True | 2016-05-07 09:43:07+0000

(1 rows)

Tracing session: dd78f580-32ff-11e6-8746-45b1db7c6f51

 activity                                                                                                                                  | timestamp                  | source       | source_elapsed
-------------------------------------------------------------------------------------------------------------------------------------------+----------------------------+--------------+----------------
                                                                                                                        Execute CQL3 query | 2016-06-15 15:48:39.512000 | 10.7.137.139 |              0
 Parsing select * from stresscql.u where c = 2953e0ef-44fd-44fb-8d3e-8a2498ae694f; [SharedPool-Worker-1] | 2016-06-15 15:48:39.512000 | 10.7.137.139 |             92
                                                                                                 Preparing statement [SharedPool-Worker-1] | 2016-06-15 15:48:39.512000 | 10.7.137.139 |            208
                                             Executing single-partition query on u [SharedPool-Worker-2] | 2016-06-15 15:48:39.513000 | 10.7.137.139 |           1626
                                                                                        Acquiring sstable references [SharedPool-Worker-2] | 2016-06-15 15:48:39.513000 | 10.7.137.139 |           1677
                                                              Partition index with 0 entries found for sstable 15405 [SharedPool-Worker-2] | 2016-06-15 15:48:39.514000 | 10.7.137.139 |           1944
                                           Skipped 0/1 non-slice-intersecting sstables, included 0 due to tombstones [SharedPool-Worker-2] | 2016-06-15 15:48:39.514000 | 10.7.137.139 |           2401
                                                                          Merging data from memtables and 1 sstables [SharedPool-Worker-2] | 2016-06-15 15:48:39.514000 | 10.7.137.139 |           2445
                                                                                                                          Request complete | 2016-06-15 15:48:39.543778 | 10.7.137.139 |          31778


Keyspace: stresscql
        Read Count: 115672739
        Read Latency: 37.24272796733031 ms.
        Write Count: 25113006
        Write Latency: 0.016802926260599788 ms.
        Pending Flushes: 0
                Table: user_identified_caller_by_telephone
                SSTable count: 847
                SSTables in each level: [1, 10, 100, 736, 0, 0, 0, 0, 0]
                Space used (live): 2032182537714
                Space used (total): 2032182537714
                Space used by snapshots (total): 0
                Off heap memory used (total): 2540382332
                SSTable Compression Ratio: 0.6868674726322842
                Number of keys (estimate): 3361569503
                Memtable cell count: 1162671
                Memtable data size: 42639539
                Memtable off heap memory used: 0
                Memtable switch count: 12
                Local read count: 115672739
                Local read latency: 40.854 ms
                Local write count: 25113006
                Local write latency: 0.019 ms
                Pending flushes: 0
                Bloom filter false positives: 1
                Bloom filter false ratio: 0.00000
                Bloom filter space used: 2115703120
                Bloom filter off heap memory used: 2115696344
                Index summary off heap memory used: 418366956
                Compression metadata off heap memory used: 6319032
                Compacted partition minimum bytes: 51
                Compacted partition maximum bytes: 72
                Compacted partition mean bytes: 72
                Average live cells per slice (last five minutes): 1.0000007348317013
                Maximum live cells per slice (last five minutes): 50
                Average tombstones per slice (last five minutes): 1.0
                Maximum tombstones per slice (last five minutes): 1

----------------

1 Answer:

Answer 0 (score: 0):

I may be reading this wrong, but I think you are creating one huge partition, which will be very slow to compact. There has been a lot of work on this in newer versions (3.6, 3.7) in tickets like CASSANDRA-11206, so be sure to use the latest release, since recent improvements will help. You may want to change `partitions: fixed(1)` and `select` in your stress profile so the data is distributed better.
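As a sketch of that stress-profile change (the population range and cluster size here are illustrative values, not tuned figures), the columnspec can be given explicit distributions so writes spread across many narrow partitions:

```yaml
columnspec:
  - name: c
    population: uniform(1..100M)   # spread across ~100M distinct partition keys (illustrative range)
  - name: f
    cluster: fixed(1)              # keep partitions narrow: one clustering value each
```

With `partitions: fixed(1)` kept in the insert section, each batch then lands on a single, small partition rather than piling rows into one.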

You may also be falling behind on compaction (the default throughput is low), which makes every read touch a lot of sstables. 251 is a lot for STCS, and because of the wide partition a read may need to hit every one of them.

With that data model and workload (stress-profile issues aside), you should try LeveledCompactionStrategy and raise the compaction throughput (you can experiment with `nodetool setcompactionthroughput`; I would start at 64 and move it up or down to see how it affects your workload). Compaction will use more I/O, but your reads will then cost less, which is a better trade-off for this workload.
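Concretely, the two suggestions above look like this against a live node (keyspace and table names taken from the question; 64 MB/s is the answer's starting point, not a tuned value):

```shell
# Raise the compaction throughput cap from the default to 64 MB/s.
nodetool setcompactionthroughput 64

# Switch the table to LeveledCompactionStrategy. Existing sstables will be
# rewritten into levels, so expect extra compaction I/O while this settles.
cqlsh -e "ALTER TABLE stresscql.u
          WITH compaction = { 'class' : 'LeveledCompactionStrategy' };"
```

Note that `setcompactionthroughput` takes effect immediately but is not persisted; to survive restarts, set `compaction_throughput_mb_per_sec` in cassandra.yaml as well.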