asio::io_service and thread_group lifetime issue

Date: 2015-12-16 05:04:31

Tags: c++ boost-asio boost-thread

Looking at answers like this one, we can do the following:

boost::asio::io_service ioService;
boost::thread_group threadpool;
{
    boost::asio::io_service::work work(ioService);
    threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService));
    threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService));
    ioService.post(boost::bind(...));
    ioService.post(boost::bind(...));
    ioService.post(boost::bind(...));
}
threadpool.join_all();

However, in my case I want to do something like this:

while (condition)
{
    ioService.post(boost::bind(...));
    ioService.post(boost::bind(...));
    ioService.post(boost::bind(...));
    threadpool.join_all();

    // DO SOMETHING WITH RESULTS
}

However, the boost::asio::io_service::work work(ioService) line does not fit in there, and as far as I can tell I cannot recreate it without also creating every thread in the pool again.

In my code the thread-creation overhead seems negligible (and it actually performs better than the previous mutex-based code), but is there a cleaner way to do this?

1 answer:

Answer 0 (score: 2):

while (condition)
{
    //... stuff
    threadpool.join_all();

    //... 
}

does not make sense, because you can only join threads once. Once joined, they are gone. You do not want to keep starting new threads all the time (use a thread pool + task queue¹).

Since you do not want to actually stop the threads, you probably do not want to destroy the work. If you insist, a shared_ptr<work> or optional<work> works nicely (just my_work.reset() it).
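
For illustration, a minimal sketch of that optional<work> route could look like this (some_task is just a hypothetical stand-in for the real jobs):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/optional.hpp>
#include <boost/thread.hpp>
#include <iostream>

void some_task(int id) { std::cout << "task " << id << "\n"; } // hypothetical job

int main()
{
    boost::asio::io_service ioService;
    boost::thread_group threadpool;

    // io_service::work is copyable, so it can live inside an optional
    boost::optional<boost::asio::io_service::work> my_work =
        boost::asio::io_service::work(ioService);

    threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService));
    threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService));

    for (int i = 0; i < 3; ++i)
        ioService.post(boost::bind(some_task, i));

    my_work.reset();       // drop the work; run() returns once the queue is empty
    threadpool.join_all(); // the threads can then be joined (exactly once)
}

Note that dropping the work still lets run() return and the threads exit, so after join_all() the pool would have to be rebuilt for the next iteration; that is exactly what the thread pool with a task queue below avoids.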

¹ Update suggestion: a simple thread pool with a task queue, as in the sample below.

UPDATE

A simple extension of "solution #2" makes it possible to wait until all tasks have been completed, without joining the workers / destroying the pool:

  void drain() {
      unique_lock<mutex> lk(mx);
      namespace phx = boost::phoenix;
      cv.wait(lk, phx::empty(phx::ref(_queue)));
  }

Note that, for reliable operation, the condition variable also needs to be signaled on dequeue:

      cv.notify_all(); // in order to signal drain

CAVEATS

  1. It is an interface that invites race conditions (the queue can accept jobs from multiple threads, so by the time drain() returns another thread may already have posted a new task).

  2. It signals when the queue is empty, not when the tasks have completed. The queue cannot know about that; if you need it, use a barrier or signal a condition from inside the task itself (the_work in this example), as sketched after the sample below. The mechanism used for queuing/scheduling is not relevant to that.

  3. Sample

    Live On Coliru

    #include <boost/thread.hpp>
    #include <boost/phoenix.hpp>
    #include <boost/optional.hpp>
    #include <boost/atomic.hpp>
    #include <deque>
    #include <iostream>
    
    using namespace boost;
    using namespace boost::phoenix::arg_names;
    
    class thread_pool
    {
      private:
          mutex mx;
          condition_variable cv;
    
          typedef function<void()> job_t;
          std::deque<job_t> _queue;
    
          thread_group pool;
    
          boost::atomic_bool shutdown;
          // each worker keeps pulling jobs until dequeue() returns none (on shutdown)
          static void worker_thread(thread_pool& q)
          {
              while (auto job = q.dequeue())
                  (*job)();
          }
    
      public:
          thread_pool() : shutdown(false) {
              for (unsigned i = 0; i < boost::thread::hardware_concurrency(); ++i)
                  pool.create_thread(bind(worker_thread, ref(*this)));
          }
    
          void enqueue(job_t job) 
          {
              lock_guard<mutex> lk(mx);
              _queue.push_back(std::move(job));
    
              cv.notify_one();
          }
    
          void drain() {
              unique_lock<mutex> lk(mx);
              namespace phx = boost::phoenix;
              cv.wait(lk, phx::empty(phx::ref(_queue)));
          }
    
          optional<job_t> dequeue() 
          {
              unique_lock<mutex> lk(mx);
              namespace phx = boost::phoenix;
    
              // block until shutdown is requested or a job becomes available
              cv.wait(lk, phx::ref(shutdown) || !phx::empty(phx::ref(_queue)));
    
              // an empty queue here means we were woken by shutdown: tell the worker to exit
              if (_queue.empty())
                  return none;
    
              auto job = std::move(_queue.front());
              _queue.pop_front();
    
              cv.notify_all(); // in order to signal drain
    
              return std::move(job);
          }
    
          ~thread_pool()
          {
              shutdown = true;
              {
                  lock_guard<mutex> lk(mx);
                  cv.notify_all();
              }
    
              pool.join_all();
          }
    };
    
    void the_work(int id)
    {
        std::cout << "worker " << id << " entered\n";
    
        // no more synchronization; the pool size determines max concurrency
        std::cout << "worker " << id << " start work\n";
        this_thread::sleep_for(chrono::milliseconds(2));
        std::cout << "worker " << id << " done\n";
    }
    
    int main()
    {
        thread_pool pool; // uses 1 thread per core
    
        for (auto round = 0ull; round < 20; ++round) { // repeat the post/drain cycle
            for (int i = 0; i < 10; ++i)
                pool.enqueue(bind(the_work, i));
    
            pool.drain(); // make the queue empty, leave the threads
            std::cout << "Queue empty\n";
        }
    
        // destructing pool joins the worker threads
    }
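
As noted in caveat 2, drain() only says that the queue is empty, not that the tasks have finished. A minimal sketch of the barrier/condition idea mentioned there might look like the following; completion_latch is a hypothetical helper, demonstrated with plain threads for brevity, but the same object can just as well be passed to jobs enqueued on the thread_pool above:

#include <boost/bind.hpp>
#include <boost/ref.hpp>
#include <boost/thread.hpp>
#include <iostream>

// Hypothetical completion latch: every task calls done(); the waiter blocks
// in wait() until all of them have actually run to completion.
struct completion_latch
{
    explicit completion_latch(int n) : remaining(n) {}

    void done()
    {
        boost::lock_guard<boost::mutex> lk(mx);
        if (--remaining == 0)
            cv.notify_all();
    }

    void wait()
    {
        boost::unique_lock<boost::mutex> lk(mx);
        while (remaining > 0)
            cv.wait(lk);
    }

  private:
    boost::mutex mx;
    boost::condition_variable cv;
    int remaining;
};

void the_work(int id, completion_latch& latch)
{
    std::cout << "worker " << id << " done\n";
    latch.done(); // signal completion from inside the task
}

int main()
{
    const int N = 10;
    completion_latch latch(N);
    boost::thread_group threads;

    for (int i = 0; i < N; ++i)
        threads.create_thread(boost::bind(the_work, i, boost::ref(latch)));

    latch.wait(); // returns only after every task has completed, not merely been dequeued
    std::cout << "all tasks completed\n";

    threads.join_all();
}

Each task calls done() as its last step, so wait() only returns once all N tasks have actually run, regardless of how they were scheduled.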