Cancelling callbacks in Boost ASIO

Date: 2015-05-02 06:16:22

Tags: multithreading sockets c++11 boost boost-asio

I have been trying to convert my code from using one io_service per network connection to using a shared one, and I am seeing some very odd behaviour on the server sockets (the client side seems to work fine).

In an attempt to figure out what is going on, I started over, building up a simple example that lets me check my assumptions about everything that should happen. The first problem I hit is that io_service::run does not exit when there are no handlers left, and as far as I can tell the handlers are not being removed from the work queue.
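My understanding of run() in the trivial case is captured by the minimal sketch below (my own illustration, not the problem code): once the one posted handler executes and no work remains, run() returns.

#include <iostream>
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io;
    io.post([]{ std::cout << "handler ran" << std::endl; });
    io.run(); // runs the one handler, then returns because no work remains
    std::cout << "run() exited" << std::endl;
}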

I have one thread that performs an async_accept followed by an async_read. There is a separate client thread (which has its own io_service). The client thread's io_service is never run, and the server's io_service is run in yet another thread.

I am using a condition variable in the server thread to wait for the read to complete (which will never happen, since the client never writes). This times out just fine, and I then call socket.cancel(). I would expect this to remove the read handler and for run to exit, because the work queue is now empty.

I do indeed see the read handler being invoked (with a cancellation error), but run never exits. And when I tie the socket's lifetime to the handler's lifetime (by having the lambda capture a shared_ptr to the socket), the memory is not freed either.

The server is set up as follows:

std::mutex mutex;
std::unique_lock<std::mutex> lock(mutex);
std::condition_variable signal;

boost::asio::io_service server_service;
boost::asio::ip::tcp::acceptor listener(server_service);
std::mutex read_mutex;
std::unique_lock<std::mutex> read_lock(read_mutex);
std::condition_variable read_done;
std::thread server([&]() {
    std::unique_lock<std::mutex> lock(mutex);
    listener.open(boost::asio::ip::tcp::v4());
    listener.set_option(boost::asio::socket_base::enable_connection_aborted(true));
    listener.bind(boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), 4567));
    listener.listen();

    std::shared_ptr<connection> server_cnx(new connection(server_service));
    listener.async_accept(server_cnx->socket,
        [&, server_cnx](const boost::system::error_code& error) {
            log_thread() << "Server got a connection " << error << std::endl;
            boost::asio::async_read_until(server_cnx->socket, server_cnx->buffer, '\n',
                [&, server_cnx](const boost::system::error_code& error, std::size_t bytes) {
                    log_thread() << "Got " << bytes << ", " << error << std::endl;
                    std::unique_lock<std::mutex> lock(read_mutex);
                    lock.unlock();
                    read_done.notify_one();
                });
        });
    lock.unlock();
    signal.notify_one();
    if ( read_done.wait_for(read_lock, std::chrono::seconds(1)) == std::cv_status::timeout ) {
        log_thread() << "Server read timed out -- cancelling socket jobs" << std::endl;
        server_cnx->socket.cancel();
        server_cnx->socket.close();
    } else {
        log_thread() << "Server data read" << std::endl;
    }
    log_thread() << "Exiting server thread" << std::endl;
});
signal.wait(lock);
log_thread() << "Server set up" << std::endl;

The io_service thread is set up as follows:

std::thread server_io([&]() {
    log_thread() << "About to service server IO requests" << std::endl;
    try {
        server_service.run();
    } catch ( ... ) {
        log_thread() << "Exception caught" << std::endl;
    }
    log_thread() << "**** Service jobs all run" << std::endl;
    signal.notify_one();
});

The output is as follows:

10.0002 139992957945728 Server set up
10.0005 139992957945728 Client set up
10.0006 139992848398080 About to service server IO requests
10.0006 139992848398080 Server got a connection system:0
11.0003 139992934819584 Server read timed out -- cancelling socket jobs
11.0004 139992934819584 Exiting server thread
11.0004 139992848398080 Got 0, system:125
20.0006 139992957945728 IO thread timed out servicing requests -- stopping it
^^^ This should not happen because the server service should have run out of work
20.0006 139992957945728 Waiting for things to close....
22.0008 139992957945728 Wait over, exiting

(The columns are: time + 10s, thread id, log message.)

At the 11 second mark you can see the async_read_until handler being invoked. It is the last handler in the server's io_service, yet run does not exit.

Even after the wait for run to exit fires and the waiting thread calls io_service::stop(), run still does not exit (hence the additional 2 second wait).

The full code is on GitHub.

1 Answer:

Answer 0 (score: 1):

The program is invoking undefined behavior when the server thread attempts to unlock read_lock, a lock it does not own.

int main()
{
  ...
  std::mutex read_mutex;
  std::unique_lock<std::mutex> read_lock(read_mutex); // Acquired by main.
  std::condition_variable read_done;
  std::thread server([&]() { // Capture lock reference.
    std::unique_lock<std::mutex> lock(mutex);
    ...
    // The next line invokes undefined behavior, as this thread did
    // not acquire read_lock.mutex().
    if (read_done.wait_for(read_lock, ...))
    //                     ^^^^^^^^^ caller does not own.
    {
      ...
    }
  });
  signal.wait(lock);
  ...
}

In particular, when calling condition_variable::wait_for(lock), the standard requires that lock.owns_lock() is true and that lock.mutex() is locked by the calling thread.
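For reference, here is a minimal sketch (my own illustration, not taken from the question's code) of a wait that satisfies these preconditions, with the waiting thread itself acquiring the mutex it passes to wait_for:

#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool done = false;

void wait_for_done()
{
  // The calling thread locks the mutex, so lock.owns_lock() is true
  // and lock.mutex() is held by this thread, as the standard requires.
  std::unique_lock<std::mutex> lock(m);
  cv.wait_for(lock, std::chrono::seconds(1), []{ return done; });
}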

Mixing synchronous and asynchronous flows often adds complexity. In this particular case, where synchronous calls are interwoven throughout each layer, using lower-level constructs for event/signal notification with no persisted state, I think it adds unnecessary complexity and over-complicates the flow. Furthermore, the broad scope of the variables can increase complexity: had the lambdas never captured read_lock, a compiler error would have occurred.

Consider the separation in space when trying to observe two events:

// I will eventually be interested when the server starts
// accepting connections, so start setting up now.
std::mutex server_mutex;
std::unique_lock<std::mutex> server_lock(server_mutex);
std::condition_variable server_started;
std::thread server([&]()
  {
    // I will eventually be interested when the server reads
    // data, so start setting up now.
    std::mutex read_mutex;
    std::unique_lock<std::mutex> read_lock(read_mutex);
    std::condition_variable read_done;
    listener.async_accept(..., 
      [&](...)
      {
        // Got connection.
        async_read_until(...,
          [&](...)
          {
            // Someone may be interested that data has been read,
            // so use the correct mutex and condition_variable
            // pair.
            std::unique_lock<std::mutex> read_lock(read_mutex);
            read_lock.unlock();
            read_done.notify_one();
          });
      }); // async_accept
    // Someone may be interested that I am accepting connections,
    // so use the correct mutex and condition_variable pair.
    std::unique_lock<std::mutex> server_lock(server_mutex);
    server_lock.unlock();
    server_started.notify_one();

    // I am now interested in if data has been read.
    read_done.wait_for(read_lock, ...);
  }); // server thread
// I am now interested in if the server has started.
server_started.wait(server_lock);

The caller has to prepare to handle an event, initiate the operation, and then wait for the event, and the operation has to know which event the caller is interested in. To make matters worse, one must now consider lock ordering to prevent deadlocks: note how, in the example above, the server thread acquires read_mutex and then server_mutex. Another thread cannot acquire the two mutexes in a different order without introducing the possibility of deadlock. In terms of complexity, this approach scales poorly with the number of events.
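As a generic illustration of the ordering hazard (with hypothetical mutexes m1 and m2, not names from the example above): two threads that acquire the same mutexes in different orders can deadlock, whereas std::lock acquires both without deadlock.

#include <mutex>

std::mutex m1, m2;

void thread_a()
{
  std::unique_lock<std::mutex> l1(m1);
  std::unique_lock<std::mutex> l2(m2); // order: m1, then m2
}

void thread_b_unsafe()
{
  std::unique_lock<std::mutex> l2(m2);
  std::unique_lock<std::mutex> l1(m1); // opposite order: can deadlock with thread_a
}

void thread_b_safe()
{
  std::unique_lock<std::mutex> l1(m1, std::defer_lock);
  std::unique_lock<std::mutex> l2(m2, std::defer_lock);
  std::lock(l1, l2); // acquires both without risk of deadlock
}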

It may be worth considering re-examining the program's flow and control structure. If it can be rewritten to be primarily asynchronous, then callback chains, continuations, or a signals-and-slots system (Boost.Signals) may simplify the solution. If one prefers that asynchronous code read as though it were synchronous, then Boost.Asio's support for coroutines can provide a clean solution. Finally, if one needs to synchronously wait on an asynchronous operation's result or time it out, then consider using Boost.Asio's support for std::future, or using futures directly.
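For the coroutine option, here is a hedged sketch (assuming Boost.Asio's spawn and yield_context, with a socket and streambuf already set up; this is not part of the original program):

#include <cstddef>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>

void start_reader(boost::asio::io_service& io_service,
                  boost::asio::ip::tcp::socket& socket,
                  boost::asio::streambuf& buffer)
{
  boost::asio::spawn(io_service,
      [&](boost::asio::yield_context yield)
      {
        // Reads synchronously from the coroutine's point of view; on
        // failure (including cancellation), `error` is set rather than
        // an exception being thrown.
        boost::system::error_code error;
        std::size_t bytes = boost::asio::async_read_until(
            socket, buffer, '\n', yield[error]);
        std::cout << "Got " << bytes << ", " << error << std::endl;
      });
}

And for the std::future option: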

// Use an asynchronous operation so that it can be cancelled on timeout.
std::future<std::size_t> on_read = boost::asio::async_read_until(
    socket, buffer, '\n', boost::asio::use_future);

// If timeout occurs, then cancel the operation.
if (on_read.wait_for(std::chrono::seconds(1)) == std::future_status::timeout)
{
  socket.cancel();
}
// Otherwise, the operation completed (with success or error).
else
{
  // If the operation failed, then on_read.get() will throw a
  // boost::system::system_error.
  auto bytes_transferred = on_read.get();
}

While I strongly advocate re-examining the overall control structure and reducing variable scope, the following example is roughly equivalent to the one above, but may be slightly easier to maintain thanks to its use of std::future:

// I will eventually be interested when the server starts
// accepting connections, so start setting up now.
std::promise<void> server_started_promise;
auto server_started = server_started_promise.get_future();
std::thread server([&]()
  {
    // I will eventually be interested when the server reads
    // data, so start setting up now.
    std::promise<void> read_done_promise;
    auto read_done = read_done_promise.get_future();
    listener.async_accept(..., 
      [&](...)
      {
        // Got connection.
        async_read_until(...,
          [&](...)
          {
            // Someone may be interested that data has been read.
            read_done_promise.set_value();
          });
      }); // async_accept
    // Someone may be interested that I am accepting connections.
    server_started_promise.set_value();

    // I am now interested in if data has been read.
    read_done.wait_for(...);
  }); // server thread
// I am now interested in if the server has started.
server_started.wait();

Here is a complete example, based on the original code, that demonstrates using std::future to control the flow synchronously and to time out asynchronous operations:

#include <future>
#include <iostream>
#include <thread>
#include <boost/asio.hpp>
#include <boost/asio/use_future.hpp>
#include <boost/optional.hpp>
#include <boost/utility/in_place_factory.hpp>

int main()
{
  using boost::asio::ip::tcp;

  // Setup server thread.
  boost::asio::io_service server_io_service;
  std::promise<tcp::endpoint> server_promise;
  auto server_future = server_promise.get_future();

  // Start server thread.
  std::thread server_thread(
    [&server_io_service, &server_promise]
    {
      tcp::acceptor acceptor(server_io_service);
      acceptor.open(tcp::v4());
      acceptor.set_option(
        boost::asio::socket_base::enable_connection_aborted(true));
      acceptor.bind(tcp::endpoint(tcp::v4(), 0));
      acceptor.listen();

      // Handlers will not chain work, so control the io_service with a work
      // object.
      boost::optional<boost::asio::io_service::work> work(
        boost::in_place(std::ref(server_io_service)));

      // Accept a connection.
      tcp::socket server_socket(server_io_service);
      auto on_accept = acceptor.async_accept(server_socket,
                                             boost::asio::use_future);

      // Server has started, so notify caller.
      server_promise.set_value(acceptor.local_endpoint());

      // Wait for connection or error.
      boost::system::system_error error =
        make_error_code(boost::system::errc::success);
      try
      {
        on_accept.get();
      }
      catch (const boost::system::system_error& e)
      {
        error = e;
      }
      std::cout << "Server got a connection " << error.code() << std::endl;

      // Read from connection.
      boost::asio::streambuf buffer;
      auto on_read = boost::asio::async_read_until(
          server_socket, buffer, '\n', boost::asio::use_future);

      // The async_read operation is work, so destroy the work object allowing
      // run() to exit.
      work = boost::none;

      // Timeout the async read operation.
      if (on_read.wait_for(std::chrono::seconds(1)) ==
            std::future_status::timeout)
      {
        std::cout << "Server read timed out -- cancelling socket jobs"
                  << std::endl;
        server_socket.close();
      }
      else
      {
        error = make_error_code(boost::system::errc::success);
        std::size_t bytes_transferred = 0;
        try
        {
          bytes_transferred = on_read.get();
        }
        catch (const boost::system::system_error& e)
        {
          error = e;
        }
        std::cout << "Got " << bytes_transferred << ", " 
                  << error.code() << std::endl;
      }
      std::cout << "Exiting server thread" << std::endl;
    });

  // Wait for server to start accepting connections.
  auto server_endpoint = server_future.get();
  std::cout << "Server set up" << std::endl;

  // Client thread.
  std::promise<void> promise;
  auto future = promise.get_future();
  std::thread client_thread(
    [&server_endpoint, &promise]
    {
      boost::asio::io_service io_service;
      tcp::socket client_socket(io_service);
      boost::system::error_code error;
      client_socket.connect(server_endpoint, error);
      std::cout << "Connected " << error << std::endl;
      promise.set_value();
      // Keep client socket alive, allowing server to timeout.
      std::this_thread::sleep_for(std::chrono::seconds(2));
      std::cout << "Exiting client thread" << std::endl;
    });
  // Wait for client to connect.
  future.get();
  std::cout << "Client set up" << std::endl;

  // Reset generic promise and future.
  promise = std::promise<void>();
  future = promise.get_future();

  // Run server's io_service.
  std::thread server_io_thread(
    [&server_io_service, &promise]
    {
      std::cout << "About to service server IO requests" << std::endl;
      try
      {
        server_io_service.run();
      }
      catch (const std::exception& e)
      {
        std::cout << "Exception caught: " << e.what() << std::endl;
      }
      std::cout << "Service jobs all run" << std::endl;
      promise.set_value();
    });

  if (future.wait_for(std::chrono::seconds(3)) ==
        std::future_status::timeout)
  {
    std::cout << "IO thread timed out servicing requests -- stopping it" 
              << std::endl;
    server_io_service.stop();
  }

  // Join all threads.
  server_io_thread.join();
  server_thread.join();
  client_thread.join();
}