Cassandra 3.9 CQL tracing: identifying a bottleneck

Date: 2016-10-22 07:43:25

Tags: amazon-ec2 cassandra cql3

I'm running a 3-node, RF=3 Cassandra cluster on AWS. I set the read request timeout to 10 ms, and I've noticed that some of my requests are timing out. Here is what I observed with TRACING ON in cqlsh:

Tracing session: 9fc1d420-9829-11e6-b04a-834837c1747b

 activity                                                                                                          | timestamp                  | source        | source_elapsed | client
-------------------------------------------------------------------------------------------------------------------+----------------------------+---------------+----------------+-----------
                                                                                                Execute CQL3 query | 2016-10-22 10:32:02.274000 | 10.20.30.40 |              0 | 127.0.0.1
        Parsing select * from recipes where id = fcc7d8b5-46d3-4867-903c-4a5c66a1fd2e; [Native-Transport-Requests-8] | 2016-10-22 10:32:02.274000 | 10.20.30.40 |            264 | 127.0.0.1
                                                                 Preparing statement [Native-Transport-Requests-8] | 2016-10-22 10:32:02.274000 | 10.20.30.40 |            367 | 127.0.0.1
                                                        reading data from /10.20.0.1 [Native-Transport-Requests-8] | 2016-10-22 10:32:02.275000 | 10.20.30.40 |            680 | 127.0.0.1
                                         Sending READ message to /10.20.0.1 [MessagingService-Outgoing-/10.20.0.1] | 2016-10-22 10:32:02.286000 | 10.20.30.40 |          12080 | 127.0.0.1
                                  READ message received from /10.20.30.40 [MessagingService-Incoming-/10.20.30.40] | 2016-10-22 10:32:02.296000 |  10.20.0.1 |             51 | 127.0.0.1
                                                         Executing single-partition query on recipes [ReadStage-8] | 2016-10-22 10:32:02.298000 |  10.20.0.1 |           2423 | 127.0.0.1
                                                                        Acquiring sstable references [ReadStage-8] | 2016-10-22 10:32:02.298000 |  10.20.0.1 |           2481 | 127.0.0.1
                           Skipped 0/4 non-slice-intersecting sstables, included 0 due to tombstones [ReadStage-8] | 2016-10-22 10:32:02.298000 |  10.20.0.1 |           2548 | 127.0.0.1
                                                             Bloom filter allows skipping sstable 55 [ReadStage-8] | 2016-10-22 10:32:02.298000 |  10.20.0.1 |           2614 | 127.0.0.1
                                                            Bloom filter allows skipping sstable 130 [ReadStage-8] | 2016-10-22 10:32:02.298000 |  10.20.0.1 |           2655 | 127.0.0.1
                                                            Bloom filter allows skipping sstable 140 [ReadStage-8] | 2016-10-22 10:32:02.298000 |  10.20.0.1 |           2704 | 127.0.0.1
                                                            Bloom filter allows skipping sstable 141 [ReadStage-8] | 2016-10-22 10:32:02.298001 |  10.20.0.1 |           2739 | 127.0.0.1
                                                           Merged data from memtables and 4 sstables [ReadStage-8] | 2016-10-22 10:32:02.298001 |  10.20.0.1 |           2796 | 127.0.0.1
                                                                   Read 0 live and 0 tombstone cells [ReadStage-8] | 2016-10-22 10:32:02.298001 |  10.20.0.1 |           2854 | 127.0.0.1
                                                                  Enqueuing response to /10.20.30.40 [ReadStage-8] | 2016-10-22 10:32:02.299000 |  10.20.0.1 |           2910 | 127.0.0.1
                         Sending REQUEST_RESPONSE message to /10.20.30.40 [MessagingService-Outgoing-/10.20.30.40] | 2016-10-22 10:32:02.302000 |  10.20.0.1 |           6045 | 127.0.0.1
                          REQUEST_RESPONSE message received from /10.20.0.1 [MessagingService-Incoming-/10.20.0.1] | 2016-10-22 10:32:02.322000 | 10.20.30.40 |          47911 | 127.0.0.1
                                                     Processing response from /10.20.0.1 [RequestResponseStage-42] | 2016-10-22 10:32:02.322000 | 10.20.30.40 |          48056 | 127.0.0.1
                                                                                                  Request complete | 2016-10-22 10:32:02.323239 | 10.20.30.40 |          49239 | 127.0.0.1

Looking at the DataStax documentation, the source_elapsed column appears to be the elapsed time, in microseconds, before the event occurred on the source node.
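To see where the time goes on a single node, you can diff consecutive source_elapsed values for that node. A minimal sketch, using the coordinator's (10.20.30.40) values copied from the trace above (event labels are abbreviated):

```python
# source_elapsed values (microseconds) for the coordinator node
# 10.20.30.40, copied from the trace above.
coordinator_events = [
    ("Execute CQL3 query", 0),
    ("Parsing statement", 264),
    ("Preparing statement", 367),
    ("reading data from /10.20.0.1", 680),
    ("Sending READ message", 12080),
    ("REQUEST_RESPONSE received", 47911),
    ("Processing response", 48056),
    ("Request complete", 49239),
]

# Microseconds spent between each pair of adjacent events on this node.
deltas = [
    (name, t - prev_t)
    for (_, prev_t), (name, t) in zip(coordinator_events, coordinator_events[1:])
]

for name, d in deltas:
    print(f"{d:>6} us before: {name}")
```

Run against these numbers, the two big jumps are 11,400 µs before "Sending READ message" and 35,831 µs before "REQUEST_RESPONSE received", so nearly all of the ~49 ms total is spent around the inter-node hop rather than in parsing or local read work.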

There is a large time gap between "Sending REQUEST_RESPONSE message to /10.20.30.40" (on 10.20.0.1) and "REQUEST_RESPONSE message received from /10.20.0.1" (on 10.20.30.40).
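Note that the source_elapsed counter restarts on each node (it drops back to 51 when 10.20.0.1 first receives the READ message), so 47911 − 6045 is not a meaningful subtraction across nodes. The timestamp column is the comparable field, assuming the two nodes' clocks are reasonably synchronized. A quick sanity check of the gap in question:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"

# Timestamps copied from the trace above:
sent = datetime.strptime("2016-10-22 10:32:02.302000", FMT)      # 10.20.0.1 sends
received = datetime.strptime("2016-10-22 10:32:02.322000", FMT)  # 10.20.30.40 receives

gap_ms = (received - sent).total_seconds() * 1000
print(f"{gap_ms:.1f} ms between send and receive")  # 20.0 ms
```

That puts roughly 20 ms between the send and receive events, twice the configured 10 ms read timeout on its own.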

Does this indicate a network latency problem?

0 Answers