MongoDB / C# poor concurrent read performance

Date: 2012-04-05 16:49:17

Tags: performance mongodb

I am testing several NoSQL solutions, focusing mainly on read performance. Today is MongoDB day. The test machine is a VM with a quad-core Xeon @ 2.93GHz and 8GB of RAM.

I am testing against a single database with a single collection of ~100,000 documents. The BSON documents are about 20 KB each, give or take.

The managed object I am using is:

private class Job
{
    public int Id { get; set; }
    public string OrganizationName { get; set; }
    public List<string> Categories { get; set; }
    public List<string> Industries { get; set; }
    public int Identifier { get; set; }
    public string Description { get; set; }
}

The test procedure:

- Create 100 threads.

- Start all threads.

- Each thread reads 20 random documents from the collection.
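The harness itself is not shown in the question, but the three steps above could be wired up roughly as follows. This is a minimal sketch under assumptions: `resetEvent` and `ThreadsCount` are names taken from the select method shown below, and the gating/counting details are guesses, not the asker's actual code.

```csharp
// Sketch of the test harness (assumed, not from the question).
private static ManualResetEvent resetEvent = new ManualResetEvent(false);
private static int ThreadsCount = 0;

private static void RunTest(MongoCollection jobs)
{
    // Step 1: create 100 threads, each parked on resetEvent.WaitOne().
    for (int i = 0; i < 100; ++i)
    {
        new Thread(TestSelectWithCursor).Start(jobs);
    }
    Console.WriteLine("100 threads created.");

    // Step 2: release all threads at once so the reads run concurrently.
    resetEvent.Set();

    // Step 3: crude wait until every worker has reported completion.
    while (ThreadsCount < 100)
    {
        Thread.Sleep(50);
    }
}
```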

This is the select method I am using:

private static void TestSelectWithCursor(object state)
{
    // Block until the main thread releases all workers at once.
    resetEvent.WaitOne();

    MongoCollection jobs = (state as MongoCollection);
    var q = jobs.AsQueryable<Job>();
    // Note: the seed is fixed, so every thread requests the same 20 ids.
    Random r = new Random(938432094);
    List<int> ids = new List<int>();
    for (int i = 0; i != 20; ++i)
    {
        ids.Add(r.Next(1000, 100000));
    }
    Stopwatch sw = Stopwatch.StartNew();
    var subset = from j in q
                 where j.Id.In(ids)
                 select j;

    // Enumerating the cursor is what actually executes the query.
    int count = 0;
    foreach (Job job in subset)
    {
        count++;
    }
    Console.WriteLine("Retrieved {0} documents in {1} ms.", count, sw.ElapsedMilliseconds);
    // Not atomic; Interlocked.Increment(ref ThreadsCount) would be the
    // thread-safe form.
    ThreadsCount++;
}

The "count++" bit is just to pretend I am doing something with the cursor after retrieving it, so please ignore it.

Anyway, my point is that these read times strike me as very slow. Here is a typical test result:

> 100 threads created.
> 
> Retrieved 20 documents in 272 ms. Retrieved 20 documents in 522 ms.
> Retrieved 20 documents in 681 ms. Retrieved 20 documents in 732 ms.
> Retrieved 20 documents in 769 ms. Retrieved 20 documents in 843 ms.
> Retrieved 20 documents in 1038 ms. Retrieved 20 documents in 1139 ms.
> Retrieved 20 documents in 1163 ms. Retrieved 20 documents in 1170 ms.
> Retrieved 20 documents in 1206 ms. Retrieved 20 documents in 1243 ms.
> Retrieved 20 documents in 1322 ms. Retrieved 20 documents in 1378 ms.
> Retrieved 20 documents in 1463 ms. Retrieved 20 documents in 1507 ms.
> Retrieved 20 documents in 1530 ms. Retrieved 20 documents in 1557 ms.
> Retrieved 20 documents in 1567 ms. Retrieved 20 documents in 1617 ms.
> Retrieved 20 documents in 1626 ms. Retrieved 20 documents in 1659 ms.
> Retrieved 20 documents in 1666 ms. Retrieved 20 documents in 1687 ms.
> Retrieved 20 documents in 1711 ms. Retrieved 20 documents in 1731 ms.
> Retrieved 20 documents in 1763 ms. Retrieved 20 documents in 1839 ms.
> Retrieved 20 documents in 1854 ms. Retrieved 20 documents in 1887 ms.
> Retrieved 20 documents in 1906 ms. Retrieved 20 documents in 1946 ms.
> Retrieved 20 documents in 1962 ms. Retrieved 20 documents in 1967 ms.
> Retrieved 20 documents in 1969 ms. Retrieved 20 documents in 1977 ms.
> Retrieved 20 documents in 1996 ms. Retrieved 20 documents in 2005 ms.
> Retrieved 20 documents in 2009 ms. Retrieved 20 documents in 2025 ms.
> Retrieved 20 documents in 2035 ms. Retrieved 20 documents in 2066 ms.
> Retrieved 20 documents in 2093 ms. Retrieved 20 documents in 2111 ms.
> Retrieved 20 documents in 2133 ms. Retrieved 20 documents in 2147 ms.
> Retrieved 20 documents in 2150 ms. Retrieved 20 documents in 2152 ms.
> Retrieved 20 documents in 2155 ms. Retrieved 20 documents in 2160 ms.
> Retrieved 20 documents in 2166 ms. Retrieved 20 documents in 2196 ms.
> Retrieved 20 documents in 2202 ms. Retrieved 20 documents in 2254 ms.
> Retrieved 20 documents in 2256 ms. Retrieved 20 documents in 2262 ms.
> Retrieved 20 documents in 2263 ms. Retrieved 20 documents in 2285 ms.
> Retrieved 20 documents in 2326 ms. Retrieved 20 documents in 2336 ms.
> Retrieved 20 documents in 2337 ms. Retrieved 20 documents in 2350 ms.
> Retrieved 20 documents in 2372 ms. Retrieved 20 documents in 2384 ms.
> Retrieved 20 documents in 2412 ms. Retrieved 20 documents in 2426 ms.
> Retrieved 20 documents in 2457 ms. Retrieved 20 documents in 2473 ms.
> Retrieved 20 documents in 2521 ms. Retrieved 20 documents in 2528 ms.
> Retrieved 20 documents in 2604 ms. Retrieved 20 documents in 2659 ms.
> Retrieved 20 documents in 2670 ms. Retrieved 20 documents in 2687 ms.
> Retrieved 20 documents in 2961 ms. Retrieved 20 documents in 3234 ms.
> Retrieved 20 documents in 3434 ms. Retrieved 20 documents in 3440 ms.
> Retrieved 20 documents in 3452 ms. Retrieved 20 documents in 3466 ms.
> Retrieved 20 documents in 3502 ms. Retrieved 20 documents in 3524 ms.
> Retrieved 20 documents in 3561 ms. Retrieved 20 documents in 3611 ms.
> Retrieved 20 documents in 3652 ms. Retrieved 20 documents in 3655 ms.
> Retrieved 20 documents in 3666 ms. Retrieved 20 documents in 3711 ms.
> Retrieved 20 documents in 3742 ms. Retrieved 20 documents in 3821 ms.
> Retrieved 20 documents in 3850 ms. Retrieved 20 documents in 4020 ms.
> Retrieved 20 documents in 5143 ms. Retrieved 20 documents in 6607 ms.
> Retrieved 20 documents in 6630 ms. Retrieved 20 documents in 6633 ms.
> Retrieved 20 documents in 6637 ms. Retrieved 20 documents in 6639 ms.
> Retrieved 20 documents in 6801 ms. Retrieved 20 documents in 9302 ms.

Bottom line, I would expect faster read times than these. I still think I am doing something wrong. Not sure what other information I can provide right now, but if anything is missing, please let me know.

I am also including, in the hope that it helps, the explain() trace of one of the queries executed by the test:

{
        "cursor" : "BtreeCursor _id_ multi",
        "nscanned" : 39,
        "nscannedObjects" : 20,
        "n" : 20,
        "millis" : 0,
        "nYields" : 0,
        "nChunkSkips" : 0,
        "isMultiKey" : false,
        "indexOnly" : false,
        "indexBounds" : {
                "_id" : [
                        [
                                3276,
                                3276
                        ],
                        [
                                8257,
                                8257
                        ],
                        [
                                11189,
                                11189
                        ],
                        [
                                21779,
                                21779
                        ],
                        [
                                22293,
                                22293
                        ],
                        [
                                23376,
                                23376
                        ],
                        [
                                28656,
                                28656
                        ],
                        [
                                29557,
                                29557
                        ],
                        [
                                32160,
                                32160
                        ],
                        [
                                34833,
                                34833
                        ],
                        [
                                35922,
                                35922
                        ],
                        [
                                39141,
                                39141
                        ],
                        [
                                49094,
                                49094
                        ],
                        [
                                54554,
                                54554
                        ],
                        [
                                67684,
                                67684
                        ],
                        [
                                76384,
                                76384
                        ],
                        [
                                85612,
                                85612
                        ],
                        [
                                85838,
                                85838
                        ],
                        [
                                91634,
                                91634
                        ],
                        [
                                99891,
                                99891
                        ]
                ]
        }
}

If you have any ideas, I would be very eager to read them. Thanks in advance!

Marcel

1 Answer:

Answer 0 (score: 2)

I suspect that "In" (the generic modifier) forces a sequential scan and fully fetches each document to check the where clause, bypassing the efficiency of the _id index. Given that the random numbers can be widely distributed, my guess is that each thread/query is essentially scanning the entire database.

I would suggest trying a couple of things: (1) query each of the 20 documents individually, one id at a time; (2) consider using MongoCursor and use Explain to get information about the query's index usage.
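The two suggestions could be sketched as below, assuming the legacy 1.x C# driver (`MongoDB.Driver` with the `Query` builders) that the question's code appears to use; treat the exact calls as assumptions rather than a definitive implementation:

```csharp
// (1) Fetch each document individually by _id instead of one $in query.
foreach (int id in ids)
{
    Job job = jobs.FindOneAs<Job>(Query.EQ("_id", id));
}

// (2) Drop down to MongoCursor and ask the server how it ran the query.
MongoCursor<Job> cursor = jobs.FindAs<Job>(Query.In("_id", new BsonArray(ids)));
BsonDocument plan = cursor.Explain();
Console.WriteLine(plan); // inspect "cursor", "nscanned", "millis", etc.
```

Comparing the per-document timings from (1) against the $in batch should show whether the query shape or the concurrency is the bottleneck.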

Best wishes,

-Gary

P.S. The thread timings also seem to indicate that some thread-scheduling effects are at work.