Why is C++ thread/future overhead so large?

Posted: 2018-06-01 05:33:20

Tags: c++ multithreading asynchronous

I have a worker routine (code below) that runs more slowly when I run it in a separate thread. As far as I can tell, the worker's code and data are completely independent of other threads. Each worker just appends nodes to a tree, and the goal is to have multiple workers growing trees at the same time.

Can someone help me understand why there is (significant) overhead when the worker runs in a separate thread?

Edit: originally I was testing WorkerFuture twice. I have corrected that, and now I get the same (better) performance in the no-thread and deferred-async cases, and a considerable overhead whenever an extra thread is involved.

Compile command (Linux): g++ -std=c++11 main.cpp -o main -O3 -pthread

Here is the output (times in milliseconds):

Thread     : 4000001 size in 1861 ms
Async      : 4000001 size in 1836 ms
Defer async: 4000001 size in 1423 ms
No thread  : 4000001 size in 1455 ms

Code:

#include <iostream>
#include <vector>
#include <random>
#include <chrono>
#include <thread>
#include <future>

struct Data
{
    int data;
};

struct Tree
{
    Data data;
    long long total;
    std::vector<Tree *> children;

    long long Size()
    {
        long long size = 1;
        for (auto c : children)
            size += c->Size();
        return size;
    }

    ~Tree()
    {
        for (auto c : children)
            delete c;
    }
};

// Deterministic stand-in for randomness: cycles through child indices
// with a static counter.
int
GetRandom(long long size)
{
    static long long counter = 0;
    return counter++ % size;
}

// Walks a pseudo-random path from the root down to a leaf, appends 100
// children to that leaf, and increments total on every node along the path.
void
Worker_(Tree *root)
{
    std::vector<Tree *> nodes = {root};
    Tree *it = root;
    while (!it->children.empty())
    {
        it = it->children[GetRandom(it->children.size())];
        nodes.push_back(it);
    }
    for (int i = 0; i < 100; ++i)
        nodes.back()->children.push_back(new Tree{{10}, 1, {}});
    for (auto t : nodes)
        ++t->total;
}

// Grows a single tree by running Worker_ the given number of times and
// returns the resulting tree size.
long long
Worker(long long iterations)
{
    Tree root = {};
    for (long long i = 0; i < iterations; ++i)
        Worker_(&root);
    return root.Size();
}

void ThreadFn(long long iterations, long long &result)
{
    result = Worker(iterations);
}

// Runs Worker on a dedicated std::thread and waits for it to finish.
long long
WorkerThread(long long iterations)
{
    long long result = 0;
    std::thread t(ThreadFn, iterations, std::ref(result));
    t.join();
    return result;
}

// Runs Worker on a new thread via std::async(std::launch::async).
long long
WorkerFuture(long long iterations)
{
    std::future<long long> f = std::async(std::launch::async, [iterations] {
        return Worker(iterations);
    });

    return f.get();
}

// Runs Worker lazily via std::async(std::launch::deferred); the work
// executes in the calling thread when get() is invoked.
long long
WorkerFutureSameThread(long long iterations)
{
    std::future<long long> f = std::async(std::launch::deferred, [iterations] {
        return Worker(iterations);
    });

    return f.get();
}

int main()
{
    long long iterations = 40000;

    auto t1 = std::chrono::high_resolution_clock::now();
    auto total = WorkerThread(iterations);
    auto t2 = std::chrono::high_resolution_clock::now();
    std::cout << "Thread     : " << total << " size in " << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << " ms\n";

    t1 = std::chrono::high_resolution_clock::now();
    total = WorkerFuture(iterations);
    t2 = std::chrono::high_resolution_clock::now();
    std::cout << "Async      : " << total << " size in " << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << " ms\n";

    t1 = std::chrono::high_resolution_clock::now();
    total = WorkerFutureSameThread(iterations);
    t2 = std::chrono::high_resolution_clock::now();
    std::cout << "Defer async: " << total << " size in " << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << " ms\n";

    t1 = std::chrono::high_resolution_clock::now();
    total = Worker(iterations);
    t2 = std::chrono::high_resolution_clock::now();
    std::cout << "No thread  : " << total << " size in " << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << " ms\n";
}

2 Answers:

Answer 0 (score: 4):

The problem seems to be caused by dynamic memory management. As soon as multiple threads are involved (even if the main thread does nothing), the C++ runtime has to synchronize access to dynamic memory (the heap), which adds some overhead. I did some experiments with GCC, and the solution to your problem is to use a scalable memory allocator library. For example, when I used tbbmalloc,

export LD_LIBRARY_PATH=$TBB_ROOT/lib/intel64/gcc4.7:$LD_LIBRARY_PATH
export LD_PRELOAD=libtbbmalloc_proxy.so.2

the whole problem disappeared.
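
To make that diagnosis concrete without pulling in TBB, here is a minimal, hypothetical sketch (not part of the original answer) of another way to take the shared heap out of the hot path: give each worker its own node pool, so appending a node no longer calls the global operator new per node. PoolTree, NodePool, and Make are made-up names for illustration only; the child vectors still allocate from the global heap, so this only removes the per-node new calls.

#include <deque>
#include <iostream>
#include <vector>

// One pool per worker: nodes live in pool-owned chunks instead of being
// allocated one by one, and are all released together with the pool.
struct PoolTree
{
    int data;
    long long total;
    std::vector<PoolTree *> children;  // the vectors themselves still use the heap
};

class NodePool
{
public:
    // Construct a node inside the pool and return a stable pointer to it.
    PoolTree *Make(int data)
    {
        nodes_.push_back(PoolTree{data, 1, {}});
        return &nodes_.back();  // deque push_back does not invalidate pointers to existing elements
    }

private:
    std::deque<PoolTree> nodes_;  // chunked storage: far fewer heap calls than one new per node
};

int main()
{
    NodePool pool;                     // would be one pool per worker thread
    PoolTree *root = pool.Make(0);
    for (int i = 0; i < 100; ++i)
        root->children.push_back(pool.Make(10));
    std::cout << root->children.size() << " nodes allocated from the pool\n";
}

If the threaded and single-threaded timings converge after a change like this (or after preloading tbbmalloc as above), that confirms the overhead was allocator synchronization rather than the worker logic itself.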

Answer 1 (score: 0):

The reason is simple: you are not doing anything in parallel. While the extra thread is doing the work, the main thread does nothing but wait for that thread's job to finish.

In the threaded case you also have extra work to do (managing the thread and synchronizing with it), so there is a trade-off.

To see any benefit, you have to do at least two things at the same time.
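
To make that point concrete, here is a minimal, self-contained sketch (not part of the original answer) of actually doing two things at once with std::async. BusyWork is a hypothetical stand-in for the question's Worker; parallelizing Worker itself would additionally require making GetRandom's static counter thread_local (otherwise two workers would race on it) and dealing with the heap contention described in the other answer.

#include <chrono>
#include <future>
#include <iostream>

// Hypothetical stand-in for Worker(): a CPU-bound job that shares no data
// with other threads.
long long BusyWork(long long iterations)
{
    long long sum = 0;
    for (long long i = 0; i < iterations; ++i)
        sum += i % 7;
    return sum;
}

int main()
{
    const long long n = 400000000;

    // Sequential baseline: one thread runs both jobs back to back.
    auto t1 = std::chrono::high_resolution_clock::now();
    long long serial = BusyWork(n) + BusyWork(n);
    auto t2 = std::chrono::high_resolution_clock::now();

    // Parallel: the two jobs overlap, so on a multi-core machine the wall-clock
    // time should drop even though the total CPU work is unchanged.
    auto f1 = std::async(std::launch::async, BusyWork, n);
    auto f2 = std::async(std::launch::async, BusyWork, n);
    long long parallel = f1.get() + f2.get();
    auto t3 = std::chrono::high_resolution_clock::now();

    std::cout << "Sequential: " << serial << " in "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << " ms\n";
    std::cout << "Parallel  : " << parallel << " in "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t3 - t2).count() << " ms\n";
}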