KyotoCabinet (TreeDB) performance degrades drastically

Date: 2014-03-10 23:11:52

Tags: database go kyotocabinet

I chose TreeDB as the Kyoto Cabinet backend, hoping it would scale up to huge volumes of data. Unfortunately, there is a problem:

# ./kyotobench
Generated string length: 1024
1000 records, type t 74.008887ms throughput: 13511 /sec
2000 records, type t 145.390096ms throughput: 13756 /sec
4000 records, type t 290.13486ms throughput: 13786 /sec
8000 records, type t 584.46691ms throughput: 13687 /sec
16000 records, type t 1.150792756s throughput: 13903 /sec
32000 records, type t 2.134860729s throughput: 14989 /sec
64000 records, type t 4.378002268s throughput: 14618 /sec
128000 records, type t 9.41012632s throughput: 13602 /sec
256000 records, type t 20.457090225s throughput: 12513 /sec
512000 records, type t 45.934115353s throughput: 11146 /sec
1024000 records, type t 1m39.120917207s throughput: 10330 /sec
2048000 records, type t 3m41.720146906s throughput: 9236 /sec
4096000 records, type t 15m26.041653712s throughput: 4423 /sec
8192000 records, type t 5h5m31.431477812s throughput: 446 /sec

I open a TreeDB, generate 2 random strings of random length (0 < len < 1024), and use them as the key and the value respectively. The code:

http://pastebin.com/0HwHPXFq
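For readers who cannot open the link, a rough sketch of what that benchmark does is shown below. The Kyoto Cabinet calls (cabinet.New, Open with KCOWRITER|KCOCREATE, Set, Close) are assumptions based on the bitbucket.org/ww/cabinet Go binding, and randString is a simplified stand-in for the real generator; the pastebin remains the authoritative version:

package main

import (
    "fmt"
    "log"
    "math/rand"
    "os"
    "time"

    "bitbucket.org/ww/cabinet" // assumed binding; the pastebin is authoritative
)

// randString returns a random string of length 1..maxLen-1 drawn from a
// printable alphabet -- a simplified stand-in for the real generator.
func randString(maxLen int) string {
    const chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
    l := 1 + rand.Intn(maxLen-1)
    buf := make([]byte, l)
    for i := range buf {
        buf[i] = chars[rand.Intn(len(chars))]
    }
    return string(buf)
}

// benchWrites inserts n random key/value pairs into a fresh TreeDB
// (the ".kct" suffix selects the B+ tree backend) and prints throughput.
func benchWrites(n int) {
    os.Remove("bench.kct") // start each round with an empty database

    db := cabinet.New()
    if err := db.Open("bench.kct", cabinet.KCOWRITER|cabinet.KCOCREATE); err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    start := time.Now()
    for i := 0; i < n; i++ {
        k, v := randString(1024), randString(1024)
        if err := db.Set([]byte(k), []byte(v)); err != nil {
            log.Fatal(err)
        }
    }
    d := time.Since(start)
    fmt.Printf("%d records, type t %v throughput: %d /sec\n",
        n, d, int(float64(n)/d.Seconds()))
}

func main() {
    for n := 1000; n <= 8192000; n *= 2 {
        benchWrites(n)
    }
}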

What is the reason for this?

Update:

I should have clarified earlier: I am not after a precise measurement of Kyoto Cabinet throughput. Rather, I am trying to test how KC scales, i.e. how read/write throughput changes as the number of keys in the db grows, in other words the amortized cost of adding/reading a record.

Creating 1 random string is amortized O(1), and creating N random strings is amortized O(N). As long as each DB operation involves a constant number of random string creations, the penalty they impose per combined operation is constant, so they have no amortized effect on the trend of database operations per second.
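In symbols (with each record needing two generated strings, and the per-string cost independent of n):

t_{\mathrm{record}}(n) = t_{\mathrm{db}}(n) + 2\,t_{\mathrm{string}}

so any change in the records-per-second trend has to come from t_db(n).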

I measured the throughput of random string creation on its own:

1000 strings, type t 65.380289ms throughput: 15295 /sec
2000 strings, type t 130.345234ms throughput: 15343 /sec
4000 strings, type t 259.886865ms throughput: 15391 /sec
8000 strings, type t 519.380392ms throughput: 15402 /sec
16000 strings, type t 1.040323537s throughput: 15379 /sec
32000 strings, type t 1.855234924s throughput: 17248 /sec
64000 strings, type t 3.709873467s throughput: 17251 /sec
128000 strings, type t 7.371360742s throughput: 17364 /sec
256000 strings, type t 14.705493792s throughput: 17408 /sec
512000 strings, type t 29.488131398s throughput: 17362 /sec
1024000 strings, type t 59.46313568s throughput: 17220 /sec
2048000 strings, type t 1m58.688153868s throughput: 17255 /sec
4096000 strings, type t 3m57.415585291s throughput: 17252 /sec
8192000 strings, type t 7m57.054025376s throughput: 17172 /sec

The code: http://pastebin.com/yfVXYbSt

As one would expect, the cost is O(n). Compare the times, e.g. for 8192000 records: about 8 minutes when only creating the random strings versus 5h5m when also writing them to the db.

Update #2:

This seems to be related to unique vs. colliding keys. In this code: http://pastie.org/8906676 I used keys and values in a manner similar to the approach used here: http://blog.creapptives.com/post/8330476086/leveldb-vs-kyoto-cabinet-my-findings (http://www.pastie.org/2295228), i.e. I generate "key" with a linearly increasing integer suffix ("key1", "key2", etc.).

(The updated code also wraps every 50,000 writes in a transaction, which seems to have some impact; a condensed sketch of the scheme follows.)
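Condensed, the keying scheme in that pastie looks roughly like this. The kvStore interface and the BeginTran/EndTran method names are assumptions about the Go cabinet binding rather than confirmed calls, and fmt.Sprintf supplies the numeric suffix; the pastie remains the authoritative version:

// kvStore captures just the calls used below; the concrete type is the
// Kyoto Cabinet handle from the pastie code. The method names here
// (Set/BeginTran/EndTran) are assumptions about the binding's API.
type kvStore interface {
    Set(key, value []byte) error
    BeginTran(hard bool) error
    EndTran(commit bool) error
}

// writeSequential inserts n records whose keys carry a linearly increasing
// integer suffix ("key1", "key2", ...), committing a transaction every
// txSize writes (50,000 in the updated code).
func writeSequential(db kvStore, n, txSize int) error {
    for i := 0; i < n; i++ {
        if i%txSize == 0 {
            if err := db.BeginTran(false); err != nil { // soft, non-synced transaction
                return err
            }
        }
        k := fmt.Sprintf("key%d", i+1)
        v := fmt.Sprintf("value%d", i+1)
        if err := db.Set([]byte(k), []byte(v)); err != nil {
            return err
        }
        if (i+1)%txSize == 0 || i == n-1 {
            if err := db.EndTran(true); err != nil { // commit this batch
                return err
            }
        }
    }
    return nil
}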

Now the throughput degrades only slowly, if it degrades at all:

4000 records, type t 10.220836ms throughput: 391357 /sec
8000 records, type t 18.113652ms throughput: 441655 /sec
16000 records, type t 36.6948ms throughput: 436029 /sec
32000 records, type t 74.048029ms throughput: 432151 /sec
64000 records, type t 148.585114ms throughput: 430729 /sec
128000 records, type t 303.646709ms throughput: 421542 /sec
256000 records, type t 633.831383ms throughput: 403892 /sec
512000 records, type t 1.297555153s throughput: 394588 /sec
1024000 records, type t 2.471077696s throughput: 414394 /sec
2048000 records, type t 5.970116441s throughput: 343041 /sec
4096000 records, type t 11.449808222s throughput: 357735 /sec
8192000 records, type t 23.142591222s throughput: 353979 /sec
16384000 records, type t 46.90204795s throughput: 349323 /sec

Once again, please look at the trend in the throughput rather than at the absolute values.

In theory TreeDB is a B+ tree, so writing a record should cost ~O(log n).

That is clearly not what is happening here. It looks as if there are hash collisions somewhere.
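For reference, if each insert really cost O(log n), the total cost of n inserts would be

\sum_{i=1}^{n} O(\log i) = O(n \log n)

so the amortized per-record cost, and hence the throughput, should change only logarithmically with n -- nothing like the drop from roughly 10,000/sec to 450/sec in the first table.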

2 Answers:

Answer 0 (score: 0):

You are benchmarking RandStrings, which, not surprisingly, is very slow. For instance, how long does this take to run?

package main

import (
    "fmt"
    "math/rand"
    "time"
)

const chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890 abcdefghijklmnopqrstuvwxyz" +
    "~!@#$%^&*()-_+={}[]\\|<,>.?/\"';:`"

const Maxlen = 1024

// RandStrings returns N random strings of length < Maxlen, unique within
// this call -- the same generator the question's benchmark uses.
func RandStrings(N int) []string {
    r := make([]string, N)
    ri := 0
    buf := make([]byte, Maxlen)
    known := map[string]bool{}

    for i := 0; i < N; i++ {
    retry:
        l := rand.Intn(Maxlen)
        for j := 0; j < l; j++ {
            buf[j] = chars[rand.Intn(len(chars))]
        }
        s := string(buf[0:l])
        if known[s] {
            goto retry
        }
        known[s] = true
        r[ri] = s
        ri++
    }
    return r
}

// runbench generates n key/value pairs exactly as the original benchmark
// does, but never touches the database, and reports throughput in the
// same format as the question's output.
func runbench(t string, n int) {
    start := time.Now()
    for i := 0; i < n; i++ {
        r := RandStrings(2)
        _ = r
    }
    d := time.Since(start)
    fmt.Printf("%d records, type %s %v throughput: %d /sec\n",
        n, t, d, int(float64(n)/d.Seconds()))
}

func main() {
    // Double the record count each round, as in the original benchmark.
    iter := 64000000
    for i := 1000; i <= iter; i *= 2 {
        runbench("t", i)
    }
}

Adapted from http://pastebin.com/0HwHPXFq

Answer 1 (score: 0):

Prepare the random strings outside the benchmark, before you start measuring time.

In addition, you are counting the file open, database open, database close, and file removal as part of the benchmark. All of this means you are unlikely to be measuring the performance of db.Set(k, v) with any precision.

Generate the iter random strings up front, then re-run your benchmark using those pre-generated strings inside the benchmarking loop.

type Pair struct{ key, value string }

// iter and genRandomString are assumed to come from the question's code:
// iter is the number of records, and genRandomString produces a random
// string not seen before, recording it in the known map.
var randString = make([]Pair, iter)

func setupRandomPairs() {
    known := make(map[string]bool)
    for i := range randString {
        randString[i] = Pair{
            key:   genRandomString(known),
            value: genRandomString(known),
        }
    }
}

Then in your benchmark code:

setupRandomPairs()
// start timing
for _, pair := range randString {
    db.Set(pair.key, pair.value)
}
// stop timing
cleanup()
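Putting both suggestions together, a minimal self-contained sketch of that harness could look like the following. mapStore is an in-memory stand-in for the Kyoto Cabinet handle purely so the example runs on its own (swap store.Set for the real db.Set), and genRandomString is a hypothetical helper mirroring the RandStrings generator from the first answer:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

const maxLen = 1024
const chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"

// genRandomString returns a random string of length 1..maxLen-1 that has
// not been produced before, recording it in known.
func genRandomString(known map[string]bool) string {
    for {
        l := 1 + rand.Intn(maxLen-1)
        buf := make([]byte, l)
        for i := range buf {
            buf[i] = chars[rand.Intn(len(chars))]
        }
        s := string(buf)
        if !known[s] {
            known[s] = true
            return s
        }
    }
}

type Pair struct{ key, value string }

// mapStore stands in for the database so the sketch runs on its own;
// replace Set with the real cabinet db.Set call.
type mapStore map[string]string

func (m mapStore) Set(k, v string) { m[k] = v }

func main() {
    const iter = 100000

    // Setup: generate every key/value pair before any timing starts.
    pairs := make([]Pair, iter)
    known := make(map[string]bool)
    for i := range pairs {
        pairs[i] = Pair{key: genRandomString(known), value: genRandomString(known)}
    }

    store := mapStore{}

    // Timed section: only the Set calls are measured.
    start := time.Now()
    for _, p := range pairs {
        store.Set(p.key, p.value)
    }
    d := time.Since(start)
    fmt.Printf("%d records in %v, throughput: %d /sec\n",
        iter, d, int(float64(iter)/d.Seconds()))
}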