Cassandra query takes longer, and adds to the memtable, when the key is fully constrained

Asked: 2017-09-06 12:46:36

Tags: cassandra cassandra-3.0

I have a Cassandra table whose key looks like this:

  

PRIMARY KEY (("k1", "k2"), "c1", "c2")) WITH CLUSTERING ORDER BY ("c1" DESC, "c2" DESC);
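For context, the full table definition would look roughly like the sketch below. Only the key structure appears in the question; the column types here are assumptions (the trace suggests `c1` holds timestamp-like strings):

```sql
-- Hypothetical reconstruction; only the PRIMARY KEY and clustering order
-- are given in the question, the column types are guesses.
CREATE TABLE feed (
    "k1" text,
    "k2" text,
    "c1" text,
    "c2" text,
    PRIMARY KEY (("k1", "k2"), "c1", "c2")
) WITH CLUSTERING ORDER BY ("c1" DESC, "c2" DESC);
```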

When I fully constrain the query, it takes longer than when I omit the last clustering key. It also performs an "Adding to feed memtable" step that the less-constrained query does not. Why is this? I know that previously this query did not add entries to the memtable, because I have custom code that runs whenever things are added to the memtable. That code should only run when something is inserted or modified, but it started running when I merely queried an item.

Edit: I should mention that both queries return 1 row, and it is the same record.
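The two queries being compared, as they appear in the traces below, are:

```sql
-- Fully constrained (point read):
SELECT c2 FROM feed WHERE k1 = 'AAA' AND k2 = 'BBB'
  AND c1 = '2017-09-05T16:09:00.222Z' AND c2 = 'CCC';

-- Last clustering key omitted (slice read):
SELECT c2 FROM feed WHERE k1 = 'AAA' AND k2 = 'BBB'
  AND c1 = '2017-09-05T16:09:00.222Z';
```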

  activity                                                                                                                                                                          | timestamp                  | source        | source_elapsed | client
 -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------+---------------+----------------+------------
                                                                                                                                                                 Execute CQL3 query | 2017-09-05 18:09:37.456000 | **.***.**.237 |              0 | ***.**.*.4
                                              Parsing select c2 from feed where k1 = 'AAA' and k2 = 'BBB' and c1 = '2017-09-05T16:09:00.222Z' and c2 = 'CCC'; [SharedPool-Worker-1] | 2017-09-05 18:09:37.456000 | **.***.**.237 |            267 | ***.**.*.4
                                                                                                                                          Preparing statement [SharedPool-Worker-1] | 2017-09-05 18:09:37.456000 | **.***.**.237 |            452 | ***.**.*.4
                                                                                                                     Executing single-partition query on feed [SharedPool-Worker-3] | 2017-09-05 18:09:37.457000 | **.***.**.237 |           1253 | ***.**.*.4
                                                                                                                                 Acquiring sstable references [SharedPool-Worker-3] | 2017-09-05 18:09:37.457000 | **.***.**.237 |           1312 | ***.**.*.4
                                                                                                                                    Merging memtable contents [SharedPool-Worker-3] | 2017-09-05 18:09:37.457000 | **.***.**.237 |           1370 | ***.**.*.4
                                                                                                                                 Key cache hit for sstable 22 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 |           6939 | ***.**.*.4
                                                                                                                                 Key cache hit for sstable 21 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 |           7077 | ***.**.*.4
                                                                                                                                 Key cache hit for sstable 12 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 |           7137 | ***.**.*.4
                                                                                                                                  Key cache hit for sstable 6 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 |           7194 | ***.**.*.4
                                                                                                                                  Key cache hit for sstable 3 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 |           7249 | ***.**.*.4
                                                                                                                                 Merging data from sstable 10 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463000 | **.***.**.237 |           7362 | ***.**.*.4
                                                                                                                                 Key cache hit for sstable 10 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463001 | **.***.**.237 |           7429 | ***.**.*.4
                                                                                                                                  Key cache hit for sstable 9 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463001 | **.***.**.237 |           7489 | ***.**.*.4
                                                                                                                                  Key cache hit for sstable 4 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463001 | **.***.**.237 |           7628 | ***.**.*.4
                                                                                                                                  Key cache hit for sstable 7 [SharedPool-Worker-3] | 2017-09-05 18:09:37.463001 | **.***.**.237 |           7720 | ***.**.*.4
                                                                                                                                 Defragmenting requested data [SharedPool-Worker-3] | 2017-09-05 18:09:37.463001 | **.***.**.237 |           7779 | ***.**.*.4
                                                                                                                                      Adding to feed memtable [SharedPool-Worker-4] | 2017-09-05 18:09:37.464000 | **.***.**.237 |           7896 | ***.**.*.4
                                                                                                                            Read 1 live and 4 tombstone cells [SharedPool-Worker-3] | 2017-09-05 18:09:37.464000 | **.***.**.237 |           7932 | ***.**.*.4
                                                                                                                                                                   Request complete | 2017-09-05 18:09:37.464092 | **.***.**.237 |           8092 | ***.**.*.4

activity                                                                                                                                              | timestamp                  | source        | source_elapsed | client
-------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------+---------------+----------------+------------
                                                                                                                                    Execute CQL3 query | 2017-09-05 18:09:44.703000 | **.***.**.237 |              0 | ***.**.*.4
                                Parsing select c2 from feed where k1 = 'AAA' and k2 = 'BBB' and c1 = '2017-09-05T16:09:00.222Z'; [SharedPool-Worker-1] | 2017-09-05 18:09:44.704000 | **.***.**.237 |            508 | ***.**.*.4
                                                                                                             Preparing statement [SharedPool-Worker-1] | 2017-09-05 18:09:44.704000 | **.***.**.237 |            717 | ***.**.*.4
                                                                                        Executing single-partition query on feed [SharedPool-Worker-2] | 2017-09-05 18:09:44.704000 | **.***.**.237 |           1377 | ***.**.*.4
                                                                                                    Acquiring sstable references [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 |           1499 | ***.**.*.4
                                                                                                    Key cache hit for sstable 10 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 |           1730 | ***.**.*.4
                                                       Skipped 8/9 non-slice-intersecting sstables, included 5 due to tombstones [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 |           1804 | ***.**.*.4
                                                                                                    Key cache hit for sstable 22 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 |           1858 | ***.**.*.4
                                                                                                    Key cache hit for sstable 21 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 |           1908 | ***.**.*.4
                                                                                                    Key cache hit for sstable 12 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705000 | **.***.**.237 |           1951 | ***.**.*.4
                                                                                                     Key cache hit for sstable 6 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705001 | **.***.**.237 |           2002 | ***.**.*.4
                                                                                                     Key cache hit for sstable 3 [SharedPool-Worker-2] | 2017-09-05 18:09:44.705001 | **.***.**.237 |           2037 | ***.**.*.4
                                                                                       Merged data from memtables and 6 sstables [SharedPool-Worker-2] | 2017-09-05 18:09:44.705001 | **.***.**.237 |           2252 | ***.**.*.4
                                                                                               Read 1 live and 4 tombstone cells [SharedPool-Worker-2] | 2017-09-05 18:09:44.705001 | **.***.**.237 |           2307 | ***.**.*.4
                                                                                                                                      Request complete | 2017-09-05 18:09:44.705458 | **.***.**.237 |           2458 | ***.**.*.4
cqlsh> show version [cqlsh 5.0.1 | Cassandra 3.7 | CQL spec 3.4.2 |
Native protocol v4]

2 Answers:

Answer 0 (score: 6):

This is a great question, and you've (helpfully) provided all the information we need to answer it!

Your first query is a point lookup (because you specify both clustering keys). The second is a slice.

If we look at the traces, the obvious difference between them is:

Skipped 8/9 non-slice-intersecting sstables, included 5 due to tombstones

This is a very good hint that we are taking two different read paths. You could use that as a starting point for a code dive, but long story short: the filter you use for your point read means you'll query the memtable/sstables in a different order - for a point read we sort by timestamp, while for a slice we first try to eliminate non-intersecting sstables.
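The difference can be illustrated with a minimal Python sketch. This is not Cassandra's actual code: the sstable layout, field names, and merge logic are invented purely to show the shape of the two strategies.

```python
# Toy model of the two read strategies. NOT Cassandra's implementation:
# sstable layout, field names, and merge logic are invented for illustration.

def point_read(sstables, ck):
    """Names-filter path: visit sstables newest-first by max timestamp,
    stopping as soon as the requested row is found."""
    touched = []
    for sst in sorted(sstables, key=lambda s: s["max_timestamp"], reverse=True):
        touched.append(sst["id"])
        if ck in sst["rows"]:
            return sst["rows"][ck], touched
    return None, touched

def slice_read(sstables, ck_lo, ck_hi):
    """Slice path: first drop sstables whose clustering range cannot
    intersect the slice, then merge the survivors (newest value wins)."""
    relevant = [s for s in sstables
                if not (s["max_ck"] < ck_lo or s["min_ck"] > ck_hi)]
    rows = {}
    for sst in sorted(relevant, key=lambda s: s["max_timestamp"]):
        for ck, value in sst["rows"].items():
            if ck_lo <= ck <= ck_hi:
                rows[ck] = value  # later (newer) sstables overwrite older values
    return rows, [s["id"] for s in relevant]
```

The point read can stop early once it finds the row in a recent sstable; the slice read cannot stop early, but may skip whole sstables whose clustering range doesn't intersect the slice.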

The comments in the code hint at this - the first:

/**
 * Do a read by querying the memtable(s) first, and then each relevant sstables sequentially by order of the sstable
 * max timestamp.
 *
 * This is used for names query in the hope of only having to query the 1 or 2 most recent query and then knowing nothing
 * more recent could be in the older sstables (which we can only guarantee if we know exactly which row we queries, and if
 * no collection or counters are included).
 * This method assumes the filter is a {@code ClusteringIndexNamesFilter}.
 */

The second:

    /*
     * We have 2 main strategies:
     *   1) We query memtables and sstables simulateneously. This is our most generic strategy and the one we use
     *      unless we have a names filter that we know we can optimize futher.
     *   2) If we have a name filter (so we query specific rows), we can make a bet: that all column for all queried row
     *      will have data in the most recent sstable(s), thus saving us from reading older ones. This does imply we
     *      have a way to guarantee we have all the data for what is queried, which is only possible for name queries
     *      and if we have neither collections nor counters (indeed, for a collection, we can't guarantee an older sstable
     *      won't have some elements that weren't in the most recent sstables, and counters are intrinsically a collection
     *      of shards so have the same problem).
     */

In your case, the first (point) read would be faster if the returned row happened to be in the memtable. Also, since you have 8+ sstables, you're probably using STCS or TWCS - if you used LCS instead, you'd likely compact that partition down to ~5 sstables and (again) get more predictable read performance.

  

  I know that previously this query did not add entries to the memtable, because I have custom code that runs whenever things are added to the memtable. That code should only run when something is inserted or modified, but it started running when I merely queried an item.

By default, neither read path should add anything to the memtable, unless you're read repairing (i.e., unless the values mismatch between replicas, or a background read-repair chance is triggered). Note that the slice query is more likely to hit a mismatch than the point query because it is scan-based - you'll read-repair any/all deletion markers (tombstones) matching c1 = '2017-09-05T16:09:00.222Z'.

Edit: I missed a line in the trace:

Defragmenting requested data

This indicates you're using STCS and touching too many sstables, so the whole partition is copied into the memtable to make future reads faster. This is a little-known optimization in STCS that kicks in when you start touching too many sstables; you can avoid it by using LCS.
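If you do want to try LCS, the switch is a one-line schema change (verify it suits your write/read mix first; changing compaction strategy triggers recompaction of existing data):

```sql
ALTER TABLE feed
WITH compaction = {'class': 'LeveledCompactionStrategy'};
```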

Answer 1 (score: 0):

You are comparing apples to oranges.

  • Your first query asks for all rows matching all of the conditions. The extra condition here is c2 = 'CCC', so Cassandra needs to do more work to return the rows that match all of these conditions.

  • In the second query you relax the matching condition on c2, so you see different performance behavior.

Suppose you have 1000 rows matching the conditions k1 = 'AAA' and k2 = 'BBB' and c1 = '2017-09-05T16:09:00.222Z'. Adding the condition on c2 might return only 4 rows (and may require checking the c2 condition against all of those rows), whereas with that condition removed, results start streaming as soon as k1, k2, and c1 match.

  • If you really want to compare, you could compare the performance of

k1 = 'AAA' and k2 = 'BBB' and c1 = '2017-09-05T16:09:00.222Z' and c2 = 'CCC'

Also, when checking performance, you need to run the same query multiple times to avoid measuring caching effects.
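For example, one way to do that in cqlsh is to enable tracing and re-run the same statement several times, comparing source_elapsed across runs (the first run typically pays key-cache and OS page-cache misses):

```sql
cqlsh> TRACING ON;
cqlsh> SELECT c2 FROM feed WHERE k1 = 'AAA' AND k2 = 'BBB'
         AND c1 = '2017-09-05T16:09:00.222Z' AND c2 = 'CCC';
-- repeat the same SELECT a few times and compare the traces
```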