So I ran this command:
$ redis-cli --intrinsic-latency 100
... some lines ...
11386032 total runs (avg latency: 8.7827 microseconds / 87826.91 nanoseconds per run).
Worst run took 5064x longer than the average latency.
The problem with this report is that 87826.91 nanoseconds is not equal to 8.7827 microseconds. Since 1 microsecond = 1000 nanoseconds, the correct value is 8782.69 nanoseconds; the reported figure is exactly ten times too large.
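The arithmetic is easy to verify with a few lines of standalone C (my own sanity check, not Redis code):

#include <stdio.h>

int main(void) {
    /* 1 microsecond = 1000 nanoseconds */
    printf("%.2f\n", 8.7827 * 1000.0); /* prints 8782.70, not 87826.91 */
    return 0;
}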
Version information:
$ redis-cli -v
redis-cli 3.0.5
$ redis-server -v
Redis server v=3.0.5 sha=00000000:0 malloc=jemalloc-3.6.0 bits=64 build=9e32aff68ca15a3f
Answer (score: 1)
In redis-cli.c there is the following code:
static void intrinsicLatencyMode(void) {
.......
        double avg_us = (double)run_time/runs;
        double avg_ns = avg_us * 10e3;
        if (force_cancel_loop || end > test_end) {
            printf("\n%lld total runs "
                    "(avg latency: "
                    "%.4f microseconds / %.2f nanoseconds per run).\n",
                    runs, avg_us, avg_ns);
            printf("Worst run took %.0fx longer than the average latency.\n",
                max_latency / avg_us);
            exit(0);
        }
The problem is in the line that converts microseconds to nanoseconds:
double avg_ns = avg_us * 10e3;
Instead of 10e3, the code should use 1e3. In C, 10e3 is 10 × 10³ = 10000, not 1000, as gdb confirms:
$ gdb -q
(gdb) print 10e3
$1 = 10000
(gdb) print 1e3
$2 = 1000
(gdb)
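Plugging the totals from the report back into the corrected formula reproduces the expected numbers. A minimal sketch follows; note that the run_time value is an assumption (--intrinsic-latency 100 tests for roughly 100 seconds, i.e. about 100,000,000 microseconds):

#include <stdio.h>

int main(void) {
    long long run_time = 100000000;  /* assumed: ~100 s test window, in microseconds */
    long long runs = 11386032;       /* total runs from the report above */
    double avg_us = (double)run_time / runs;
    double avg_ns = avg_us * 1e3;    /* fixed: 1e3 instead of the buggy 10e3 */
    printf("avg latency: %.4f microseconds / %.2f nanoseconds per run\n",
           avg_us, avg_ns);          /* ~8.7827 us / ~8782.7 ns */
    return 0;
}

With 1e3 the two reported units finally agree with each other.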