Possible std::async implementation bug on Windows

Date: 2018-06-17 17:32:06

Tags: c++ concurrency stl cntk

There seems to be a bug in the Windows implementation of std::async. Under heavy load (on the order of 1000 threads per second started via std::async), the async tasks are never scheduled, and waiting on the returned futures leads to a deadlock. See this piece of code (modified to use the deferred launch policy instead of async):

BundlingChunk(size_t numberOfInputs, Bundler* parent, ChunkIdType chunkId)
        : m_numberOfInputs(numberOfInputs), m_parent(parent), m_chunkId(chunkId)
    {
        const BundlerChunkDescription& chunk = m_parent->m_chunks[m_chunkId];
        const ChunkInfo& original = chunk.m_original;
        auto& deserializers = m_parent->m_deserializers;

        // Fetch all chunks in parallel.
        std::vector<std::map<ChunkIdType, std::shared_future<ChunkPtr>>> chunks;
        chunks.resize(chunk.m_secondaryChunks.size());
        static std::atomic<unsigned long long int> chunksInProgress = 0;

        for (size_t i = 0; i < chunk.m_secondaryChunks.size(); ++i)
        {
            for (const auto& c : chunk.m_secondaryChunks[i])
            {
                const auto chunkCreationLambda = ([this, c, i] {
                    chunksInProgress++;
                    ChunkPtr chunk = m_parent->m_weakChunkTable[i][c].lock();
                    if (chunk) {
                        chunksInProgress--;
                        return chunk;
                    }
                    chunksInProgress--;
                    return m_parent->m_deserializers[i]->GetChunk(c);
                });
                std::future<ChunkPtr> chunkCreateFuture = std::async(std::launch::deferred, chunkCreationLambda);
                chunks[i].emplace(c, chunkCreateFuture.share());
            }
        }

        std::vector<SequenceInfo> sequences;
        sequences.reserve(original.m_numberOfSequences);

        // Creating chunk mapping.
        m_parent->m_primaryDeserializer->SequenceInfosForChunk(original.m_id, sequences);
        ChunkPtr drivingChunk = chunks.front().find(original.m_id)->second.get();
        m_sequenceToSequence.resize(deserializers.size() * sequences.size());
        m_innerChunks.resize(deserializers.size() * sequences.size());
        for (size_t sequenceIndex = 0; sequenceIndex < sequences.size(); ++sequenceIndex)
        {
            if (chunk.m_invalid.find(sequenceIndex) != chunk.m_invalid.end())
            {
                continue;
            }

            size_t currentIndex = sequenceIndex * deserializers.size();
            m_sequenceToSequence[currentIndex] = sequences[sequenceIndex].m_indexInChunk;
            m_innerChunks[currentIndex] = drivingChunk;
        }

        // Creating sequence mapping and requiring underlying chunks.
        SequenceInfo s;
        for (size_t deserializerIndex = 1; deserializerIndex < deserializers.size(); ++deserializerIndex)
        {
            auto& chunkTable = m_parent->m_weakChunkTable[deserializerIndex];
            for (size_t sequenceIndex = 0; sequenceIndex < sequences.size(); ++sequenceIndex)
            {
                if (chunk.m_invalid.find(sequenceIndex) != chunk.m_invalid.end())
                {
                    continue;
                }

                size_t currentIndex = sequenceIndex * deserializers.size() + deserializerIndex;
                bool exists = deserializers[deserializerIndex]->GetSequenceInfo(sequences[sequenceIndex], s);
                if (!exists)
                {
                    if(m_parent->m_verbosity >= (int)TraceLevel::Warning)
                        fprintf(stderr, "Warning: sequence '%s' could not be found in the deserializer responsible for stream '%ls'\n",
                            m_parent->m_corpus->IdToKey(sequences[sequenceIndex].m_key.m_sequence).c_str(),
                            deserializers[deserializerIndex]->StreamInfos().front().m_name.c_str());
                    m_sequenceToSequence[currentIndex] = SIZE_MAX;
                    continue;
                }

                m_sequenceToSequence[currentIndex] = s.m_indexInChunk;
                ChunkPtr secondaryChunk = chunkTable[s.m_chunkId].lock();
                if (!secondaryChunk)
                {
                    secondaryChunk = chunks[deserializerIndex].find(s.m_chunkId)->second.get();
                    chunkTable[s.m_chunkId] = secondaryChunk;
                }

                m_innerChunks[currentIndex] = secondaryChunk;
            }
        }
    }

My version above is modified so that the async tasks are launched with the deferred policy instead of async, which works around the problem. This is with the VS2017 redistributable 14.12.25810. Has anyone else seen anything like this? Reproducing the issue is as simple as training a CNTK model that uses the text and image readers on a machine with a GPU and an SSD, so that CPU-side deserialization becomes the bottleneck. After about 30 minutes of training, the deadlock usually occurs. Has anyone seen a similar issue on Linux? If so, it might be a bug in the code, although I doubt it, because the debug counter chunksInProgress is always 0 after the deadlock. For reference, the whole source file is at https://github.com/Microsoft/CNTK/blob/455aef80eeff675c0f85c6e34a03cb73a4693bff/Source/Readers/ReaderLib/Bundler.cpp
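
For clarity, here is a minimal standalone sketch (not taken from the CNTK sources) of the difference between the two launch policies that the workaround relies on: a deferred task runs lazily on whichever thread calls get()/wait(), so it never depends on the runtime scheduling a background thread, while an async task does.

#include <future>
#include <iostream>
#include <thread>

int main()
{
    // std::launch::async: the runtime must run the task as if on a new thread.
    std::future<std::thread::id> eager =
        std::async(std::launch::async, [] { return std::this_thread::get_id(); });

    // std::launch::deferred: nothing runs until get()/wait(); the lambda then
    // executes synchronously on the calling thread.
    std::future<std::thread::id> lazy =
        std::async(std::launch::deferred, [] { return std::this_thread::get_id(); });

    std::cout << "caller:   " << std::this_thread::get_id() << '\n'
              << "async:    " << eager.get() << '\n'
              << "deferred: " << lazy.get() << '\n'; // same id as the caller
    return 0;
}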

2 answers:

Answer 0 (score: 2)

New day, better answer (much better). Read on.

I spent some time investigating the behaviour of std::async on Windows, and you are right: it is a different animal, see here.

So, if your code relies on std::async always starting a new thread of execution and returning immediately, you can't use it. Not on Windows, anyway. On my machine the limit appears to be 768 background threads, which more or less fits with what you observed.
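
If you want to observe this yourself, here is a rough, self-contained sketch: it counts the distinct thread ids seen across many std::launch::async tasks, which gives an indication of whether a fresh thread is really created for every task (bear in mind that thread ids may be reused once a thread has exited, so this is only an indicator, not proof).

#include <future>
#include <iostream>
#include <mutex>
#include <set>
#include <thread>
#include <vector>

int main()
{
    std::mutex m;
    std::set<std::thread::id> ids;
    std::vector<std::future<void>> futures;

    for (int i = 0; i < 2000; ++i)
        futures.push_back(std::async(std::launch::async, [&] {
            std::lock_guard<std::mutex> lock(m);
            ids.insert(std::this_thread::get_id());
        }));

    for (auto& f : futures)
        f.get();

    // A one-thread-per-task implementation tends to report close to 2000 distinct
    // ids; a pooled implementation that reuses workers typically reports far fewer.
    std::cout << ids.size() << " distinct worker threads\n";
    return 0;
}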

Anyway, I wanted to learn more about modern C++, so I had a go at rolling my own replacement for std::async that I can use on Windows with the semantics the OP wants. I therefore humbly present the following:

AsyncTask: a replacement for std::async

#include <future>
#include <thread>

template <class Func, class... Args>
    std::future <std::result_of_t <std::decay_t <Func> (std::decay_t <Args>...)>>
        AsyncTask (Func&& f, Args&&... args)
{
    using decay_func = std::decay_t <Func>;
    using return_type = std::result_of_t <decay_func (std::decay_t <Args>...)>;

    // Wrap the call in a packaged_task so we can hand back a future for the result.
    std::packaged_task <return_type (decay_func f, std::decay_t <Args>... args)>
        task ([] (decay_func f, std::decay_t <Args>... args)
    {
        return f (args...);
    });

    auto task_future = task.get_future ();

    // Always run on a brand-new thread and detach it; the future stays valid
    // because it refers to the shared state, not to the thread itself.
    std::thread t (std::move (task), f, std::forward <Args> (args)...);
    t.detach ();
    return task_future;
}

Test program

#include <iostream>
#include <string>

int add_two_integers (int a, int b)
{
    return a + b;
}

std::string append_to_string (const std::string& s)
{
    return s + " addendum";
}

int main ()
{
    auto /* i.e. std::future <int> */ f1 = AsyncTask (add_two_integers, 1, 2);
    auto /* i.e. int */ i = f1.get ();
    std::cout << "add_two_integers : " << i << std::endl;

    auto /* i.e. std::future <std::string> */ f2 = AsyncTask (append_to_string, "Hello world");
    auto /* i.e. std::string */ s = f2.get ();
    std::cout << "append_to_string : " << s << std::endl;
    return 0;
}

Output

add_two_integers : 3
append_to_string : Hello world addendum

Live demo here (gcc) and here (clang).

I learned a lot from writing this, and it was fun. I'm pretty new to this stuff, so all comments are welcome. I'll be happy to update this post if I've got anything wrong.

Answer 1 (score: 0)

Inspired by Paul Sander's answer, I have tried to simplify his code:

#include <functional>
#include <future>
#include <thread>
#include <type_traits>

template <class Func, class... Args>
[[nodiscard]] std::future<std::invoke_result_t<std::decay_t<Func>, std::decay_t<Args>...>>
RunInThread(Func&& func, Args&&... args){
  using return_type = std::invoke_result_t<std::decay_t<Func>, std::decay_t<Args>...>;

  // std::bind decay-copies func and args, so the packaged_task takes no parameters.
  auto bound_func = std::bind(std::forward<Func>(func), std::forward<Args>(args)...);
  std::packaged_task<return_type(void)> task(bound_func);
  auto task_future = task.get_future();
  // Run the task on a detached thread; the returned future stays valid.
  std::thread(std::move(task)).detach();
  return task_future;
}
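
For completeness, a small usage sketch along the lines of the test program in the other answer (it assumes the RunInThread template above is in scope):

#include <iostream>
#include <string>

int main()
{
    auto f1 = RunInThread([](int a, int b) { return a + b; }, 1, 2);
    auto f2 = RunInThread([](const std::string& s) { return s + " addendum"; },
                          std::string("Hello world"));

    std::cout << f1.get() << std::endl;  // 3
    std::cout << f2.get() << std::endl;  // Hello world addendum
    return 0;
}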