Boost asio, single TCP server, many clients

Asked: 2017-10-25 14:25:56

Tags: c++ boost tcp boost-asio

I am creating a TCP server with Boost.Asio that will accept connections from many clients, receive data, and send back acknowledgements. The catch is that I want to be able to accept all the clients, but serve only one at a time; every other transaction should wait in a queue.

Example:

  1. Client1 connects
  2. Client2 connects
  3. Client1 sends data and asks for a reply
  4. Client2 sends data and asks for a reply
  5. Client2's request is put into the queue
  6. Client1's data is read, the server replies, the transaction ends
  7. Client2's request is taken from the queue; the server reads the data, replies, and the transaction ends

So it is something between an asynchronous server and a blocking one: I want to handle one transaction at a time, but still keep all the client sockets and their pending requests in a queue.

I was able to build a server with all the client-server communication I need, but only single-threaded: after the client disconnects, the server terminates too. I really don't know how to start implementing what I described above. Should I open a new thread every time I accept a connection? Should I use async_accept or a blocking accept?

I have read the boost::asio chat example, where many clients connect to a single server, but it has none of the queueing mechanism I need here.

I know this post may be a bit confusing; TCP servers are new to me, so I'm not fluent with the terminology yet. There is no source code to post yet, because I'm only asking for help with the concept of this project.

2 Answers:

Answer 0 (score: 4)

Just keep accepting.

You don't show code, but it usually looks like this:

void do_accept() {
    acceptor_.async_accept(socket_, [this](boost::system::error_code ec) {
        std::cout << "async_accept -> " << ec.message() << "\n";
        if (!ec) {
            std::make_shared<Connection>(std::move(socket_))->start();
            do_accept(); // THIS LINE
        }
    });
}

If you leave out the line marked // THIS LINE, you will indeed never accept more than one connection.

If that doesn't help, please post some code we can work with.

For fun, a demo

This uses only standard-library features for the non-network parts.

Network listener

The network part is as outlined before:

#include <boost/asio.hpp>
#include <boost/asio/high_resolution_timer.hpp>
#include <functional> // std::function, used by Shared::PostRequest
#include <istream>

using namespace std::chrono_literals;
using Clock = std::chrono::high_resolution_clock;

namespace Shared {
    using PostRequest = std::function<void(std::istream& is)>;
}

namespace Network {

    namespace ba = boost::asio;
    using ba::ip::tcp;
    using error_code = boost::system::error_code;

    using Shared::PostRequest;

    struct Connection : std::enable_shared_from_this<Connection> {
        Connection(tcp::socket&& s, PostRequest poster) : _s(std::move(s)), _poster(poster) {}

        void process() {
            auto self = shared_from_this();
            ba::async_read(_s, _request, [this,self](error_code ec, size_t) {
                if (!ec || ec == ba::error::eof) {
                    std::istream reader(&_request);
                    _poster(reader);
                }
            });
        }

      private:
        tcp::socket   _s;
        ba::streambuf _request;
        PostRequest   _poster;
    };

    struct Server {

        Server(unsigned port, PostRequest poster) : _port(port), _poster(poster) {}

        void run_for(Clock::duration d = 30s) {
            _stop.expires_from_now(d);
            _stop.async_wait([this](error_code ec) { if (!ec) _svc.post([this] { _a.close(); }); });

            _a.listen();

            do_accept();

            _svc.run();
        }
      private:
        void do_accept() {
            _a.async_accept(_s, [this](error_code ec) {
                if (!ec) {
                    std::make_shared<Connection>(std::move(_s), _poster)->process();
                    do_accept();
                }
            });
        }

        unsigned short            _port;
        PostRequest               _poster;

        ba::io_service            _svc;
        ba::high_resolution_timer _stop { _svc };
        tcp::acceptor             _a { _svc, tcp::endpoint {{}, _port } };
        tcp::socket               _s { _svc };
    };
}

The only "connection" to the service part is the PostRequest handler that is passed to the server at construction:

Network::Server server(6767, handler);

I also chose asynchronous operations, so we can have a timer that stops the service even though we don't use any threads:

server.run_for(3s); // this blocks

The service part

This is completely separate and will use threads. First, let's define a Request and a thread-safe Queue:

namespace Service {
    struct Request {
        std::vector<char> data; // or whatever you read from the sockets...
    };

    Request parse_request(std::istream& is) {
        Request result;
        result.data.assign(std::istream_iterator<char>(is), {});
        return result;
    }

    struct Queue {
        Queue(size_t max = 50) : _max(max) {}

        void enqueue(Request req) {
            std::unique_lock<std::mutex> lk(mx);
            cv.wait(lk, [this] { return _queue.size() < _max; });
            _queue.push_back(std::move(req));

            cv.notify_one();
        }

        Request dequeue(Clock::time_point deadline) {
            Request req;

            {
                std::unique_lock<std::mutex> lk(mx);
                _peak = std::max(_peak, _queue.size());
                if (cv.wait_until(lk, deadline, [this] { return _queue.size() > 0; })) {
                    req = std::move(_queue.front());
                    _queue.pop_front();
                    cv.notify_one();
                } else {
                    throw std::range_error("dequeue deadline");
                }
            }

            return  req;
        }

        size_t peak_depth() const {
            std::lock_guard<std::mutex> lk(mx);
            return _peak;
        }

      private:
        mutable std::mutex mx;
        mutable std::condition_variable cv;

        size_t _max = 50;
        size_t _peak = 0;
        std::deque<Request> _queue;
    };

Nothing special here, and it doesn't actually use threads yet. Let's create a worker function that takes a reference to the queue (more than one worker can be started if needed):

    void worker(std::string name, Queue& queue, Clock::duration d = 30s) {
        auto const deadline = Clock::now() + d;

        while(true) try {
            auto r = queue.dequeue(deadline);
            (std::cout << "Worker " << name << " handling request '").write(r.data.data(), r.data.size()) << "'\n";
        }
        catch(std::exception const& e) {
            std::cout << "Worker " << name << " got " << e.what() << "\n";
            break;
        }
    }
}

The main driver

This is where the Queue gets instantiated, and where the network server and some worker threads are started:

int main() {
    Service::Queue queue;

    auto handler = [&](std::istream& is) {
            queue.enqueue(Service::parse_request(is));
        };

    Network::Server server(6767, handler);

    std::vector<std::thread> pool;
    pool.emplace_back([&queue] { Service::worker("one", queue, 6s); });
    pool.emplace_back([&queue] { Service::worker("two", queue, 6s); });

    server.run_for(3s); // this blocks

    for (auto& thread : pool)
        if (thread.joinable())
            thread.join();

    std::cout << "Maximum queue depth was " << queue.peak_depth() << "\n";
}

Live demo

See It Live On Coliru

The test load looks like this:

for a in "hello world" "the quick" "brown fox" "jumped over" "the pangram" "bye world"
do
     netcat 127.0.0.1 6767 <<< "$a" || echo "not sent: '$a'"&
done
wait

It prints something like:

Worker one handling request 'brownfox'
Worker one handling request 'thepangram'
Worker one handling request 'jumpedover'
Worker two handling request 'Worker helloworldone handling request 'byeworld'
Worker one handling request 'thequick'
'
Worker one got dequeue deadline
Worker two got dequeue deadline
Maximum queue depth was 6

Answer 1 (score: 0)

The includes you need (some may be unnecessary):

boost/asio.hpp, boost/thread.hpp, boost/asio/io_service.hpp,
boost/asio/spawn.hpp, boost/asio/write.hpp, boost/asio/buffer.hpp,
boost/asio/ip/tcp.hpp, iostream, stdlib.h, array, string,
vector, string.h, stdio.h, process.h, iterator

using namespace boost::asio;
using namespace boost::asio::ip;

io_service ioservice;

tcp::endpoint sim_endpoint{ tcp::v4(), 4066 };              //{which connectiontype, portnumber}
tcp::acceptor sim_acceptor{ ioservice, sim_endpoint };
std::deque<tcp::socket> sim_sockets;  // deque (needs <deque>) rather than vector: growing it never relocates existing sockets that do_write threads are still reading from

static int iErgebnis;   // NB: shared by every do_write thread -- fine for one client, a data race with several
int iSocket = 0;


void do_write(int a)                                        //int a is the position of the socket in the container
{
    int iWSchleife = 1;                                     //to stay connected with putty or something
    static char chData[32000];                              //NB: static, so also shared across do_write threads
    std::string sBuf = "Received!\r\n";

    while (iWSchleife > 0)          
    {
        boost::system::error_code error;
        memset(chData, 0, sizeof(chData));        //clear the char 

        iErgebnis = sim_sockets[a].read_some(boost::asio::buffer(chData), error);           //recv data from client
        iWSchleife = iErgebnis;                                                             //if iErgebnis is bigger than 0 it stays in the loop; iErgebnis is always > 0 when data was received

        if (iErgebnis > 0) {
            printf("%d data received from client : \n%s\n\n", iErgebnis, chData);
            write(sim_sockets[a], boost::asio::buffer(sBuf), error);  //send data to client
        }
        else {
            boost::system::error_code ec;
            sim_sockets[a].shutdown(boost::asio::ip::tcp::socket::shutdown_send, ec);       //close the socket when no data
            if (ec)
            {
                printf("shutdown error");                                                   // An error occurred.
            }
        }
    }
}

void do_accept(yield_context yield)
{
    while (1)                                                   //endless loop to accept limitless clients
    {
        sim_sockets.emplace_back(ioservice);                    //look to the link below for more info
        sim_acceptor.async_accept(sim_sockets.back(), yield);   //waits here to accept an client

        boost::thread dosome(do_write, iSocket);                //when accepted, starts the thread do_write and passes the parameter iSocket
        iSocket++;                                              //to know the position of the socket in the vector

    }
}

int main()
{
    sim_acceptor.listen();
    spawn(ioservice, do_accept);            //here you can learn more about Coroutines https://theboostcpplibraries.com/boost.coroutine
    ioservice.run();                        //from here control jumps into do_accept
    getchar(); 
}