I am using YCSB to benchmark a number of different NoSQL databases, but I am having a hard time interpreting the throughput-versus-latency results as I change the number of client threads.
For example, when benchmarking Cassandra with workload A (50/50 reads and updates) using 16 client threads, I run the following command:
bin/ycsb run cassandra-cql -p hosts=xx.xx.xx.xx -p recordcount=525600 -p operationcount=525600 -threads 16 -P workloads/workloada -s > workloada_525600_16_threads_run_res.txt
which gives the following output:
[OVERALL], RunTime(ms), 62751
[OVERALL], Throughput(ops/sec), 8375.962136061577
[TOTAL_GCS_PS_Scavenge], Count, 64
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 289
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.46055042947522745
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 64
[TOTAL_GC_TIME], Time(ms), 289
[TOTAL_GC_TIME_%], Time(%), 0.46055042947522745
[READ], Operations, 262650
[READ], AverageLatency(us), 1844.6075042832667
[READ], MinLatency(us), 290
[READ], MaxLatency(us), 116159
[READ], 95thPercentileLatency(us), 3081
[READ], 99thPercentileLatency(us), 7551
[READ], Return=OK, 262650
[CLEANUP], Operations, 16
[CLEANUP], AverageLatency(us), 139458.5
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 2232319
[CLEANUP], 95thPercentileLatency(us), 19
[CLEANUP], 99thPercentileLatency(us), 2232319
[UPDATE], Operations, 262950
[UPDATE], AverageLatency(us), 1764.8220193953223
[UPDATE], MinLatency(us), 208
[UPDATE], MaxLatency(us), 95807
[UPDATE], 95thPercentileLatency(us), 2901
[UPDATE], 99thPercentileLatency(us), 7031
[UPDATE], Return=OK, 262950
Running the same workload with 32 threads gives:
[OVERALL], RunTime(ms), 51785
[OVERALL], Throughput(ops/sec), 10149.65723665154
[TOTAL_GCS_PS_Scavenge], Count, 124
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 310
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.5986289466061601
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 124
[TOTAL_GC_TIME], Time(ms), 310
[TOTAL_GC_TIME_%], Time(%), 0.5986289466061601
[READ], Operations, 262848
[READ], AverageLatency(us), 2947.844628834916
[READ], MinLatency(us), 363
[READ], MaxLatency(us), 194559
[READ], 95thPercentileLatency(us), 5079
[READ], 99thPercentileLatency(us), 11055
[READ], Return=OK, 262848
[CLEANUP], Operations, 32
[CLEANUP], AverageLatency(us), 69601.5625
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 2228223
[CLEANUP], 95thPercentileLatency(us), 3
[CLEANUP], 99thPercentileLatency(us), 2228223
[UPDATE], Operations, 262752
[UPDATE], AverageLatency(us), 2881.930485781269
[UPDATE], MinLatency(us), 316
[UPDATE], MaxLatency(us), 203391
[UPDATE], 95thPercentileLatency(us), 4987
[UPDATE], 99thPercentileLatency(us), 10711
[UPDATE], Return=OK, 262752
The overall runtime is shorter and the throughput is therefore higher, but the latencies are higher as well.
I am not quite sure how to interpret these results, or how to find the "proper" number of client threads to run with.
Answer 0 (score: 4)
In order to have a qualified benchmark, you should first define the SLA requirements the system needs to meet.
Let's say your workload pattern is 50/50 WR/RD, and your SLA requirements are a throughput of 10K ops/sec with a 99th-percentile latency below 10 ms. Use the YCSB target flag to generate the required throughput, and try various thread counts to see which one meets your SLA needs.
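For example, a minimal sketch reusing the host and workload parameters from your question (the -target flag caps the offered load, here at the 10K ops/sec from the hypothetical SLA, while -threads varies the concurrency):

# sweep thread counts at a fixed offered load of 10K ops/sec (the SLA target)
for t in 8 16 32 64; do
  bin/ycsb run cassandra-cql -p hosts=xx.xx.xx.xx -p recordcount=525600 -p operationcount=525600 -target 10000 -threads $t -P workloads/workloada -s > workloada_target10000_${t}_threads_run_res.txt
done
# compare tail latency across the runs
grep 99thPercentileLatency workloada_target10000_*_threads_run_res.txt

The smallest thread count that sustains the target throughput while keeping the 99th percentile under 10 ms satisfies the SLA; adding threads beyond that point mostly adds latency.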
In a sense, when using more threads the throughput increases (more ops/sec), but it comes at the cost of latency; the Little's Law check after the list below makes this concrete with your own numbers. You should look at the relevant database metrics and try to find the bottleneck. It could be:
- the client (you may need a stronger client machine, or fewer threads per client but more clients, for better parallelism)
- the network
- the database server (disk / RAM - use a stronger instance)
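A quick sanity check with Little's Law (in-flight operations = throughput x average latency), using the averages reported in your two runs:

16 threads: 8376 ops/sec x ~0.0018 s  = ~15 operations in flight
32 threads: 10150 ops/sec x ~0.0029 s = ~29 operations in flight

In both runs the number of in-flight operations is pinned at roughly the thread count, i.e. each thread keeps exactly one synchronous request outstanding. Doubling the threads therefore mostly queues more concurrent work against the same server capacity: throughput rises a little (8.4K to 10.1K ops/sec) while per-operation latency rises a lot (~1.8 ms to ~2.9 ms), which is a classic sign that the server is approaching saturation.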
You can read more about database benchmarking considerations here.