I need some guidance on writing an asynchronous TCP Asio client to retrieve data from a remote server. It is for a proprietary file-retrieval protocol I have to implement; the file server in this case is not under my control. The server waits for an initial request data structure (shown below) to be sent to its port 1080. Of the existing examples, the asynchronous echo client is the closest to what I need, except that instead of strings I send and receive variable-sized data structures that share a fixed common header (described later). Unfortunately, the Boost.Asio examples do not include anything like that.
Once the server receives and validates the initial request structure, it sends back a response structure, which the client needs to inspect before entering the download handshake loop (beyond that exchange, the protocol is essentially TFTP over TCP).
If that response structure arrives and is valid, the client moves to the next state, the data-transfer loop, in which it sends data-block requests.
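To keep the flow straight in my head I sketched the client side as a tiny state machine (the enum and the names are mine, not part of the protocol):

```cpp
#include <cassert>

// Client-side protocol states as I understand them (my own naming).
enum class ClientState {
    Connecting,        // TCP connect in progress
    AwaitingResponse,  // ConnectRequest sent, waiting for the response structure
    Transferring,      // response validated, sending data-block requests
    Done               // transfer finished or aborted
};

// The only forward transitions, as I understand the protocol.
inline ClientState next(ClientState s) {
    switch (s) {
    case ClientState::Connecting:       return ClientState::AwaitingResponse;
    case ClientState::AwaitingResponse: return ClientState::Transferring;
    case ClientState::Transferring:     return ClientState::Done;
    default:                            return ClientState::Done;
    }
}
```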
I have tried to build on the Asio async TCP client example, but that one is stream-oriented, and what I really want is something more datagram-like over TCP: I cannot issue the next handshake request until a complete data structure has been received. Every data structure carries the same header layout, a message identifier and a length field, followed by variable data specific to each message type.
I'm sure this is not a particularly unusual kind of TCP-based protocol, but I cannot find any similar examples unless they are synchronous (which I would rather avoid if at all possible).
Unfortunately I have not made much more progress on the client code, because I am somewhat confused about how to asynchronously read a fixed-size chunk of TCP data and then send the next request block in the usual Asio callback style.
I think the main problem is the async_read call. It has a 1024-byte buffer that it tries to read the server data into, but the response packet is only 20 bytes long, so the read hangs waiting for more data. If I change the async_read to expect 20 bytes, it returns immediately with the expected data from the server. Is there a particular way I should be doing this? Ideally I would read the length field from the server first, allocate enough space for the message, read the variable-length data that follows, and then construct the data structure from it. Is that the approach I should use?
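The length-prefix handling I have in mind would look something like this (the helpers and the CommonHeader name are mine, and I am assuming mByteCount is the total on-wire size of the message, big-endian like the rest):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Size of the common header shared by every message: two uint16_t
// followed by three uint32_t fields.
constexpr std::size_t kHeaderSize = 2 + 2 + 4 + 4 + 4; // 16 bytes

// Hand-rolled big-endian reads, independent of host byte order.
inline uint16_t be16(const uint8_t* p) {
    return static_cast<uint16_t>((p[0] << 8) | p[1]);
}
inline uint32_t be32(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
}

// Decoded common header. If mByteCount is the total message length on
// the wire, the variable part still to read is mByteCount - kHeaderSize.
struct CommonHeader {
    uint16_t mMessageId;
    uint16_t mByteCount;
    uint32_t mPacketId;
    uint32_t mReserved;
    uint32_t mCRC;
};

inline CommonHeader decodeHeader(const uint8_t* p) {
    CommonHeader h;
    h.mMessageId = be16(p);
    h.mByteCount = be16(p + 2);
    h.mPacketId  = be32(p + 4);
    h.mReserved  = be32(p + 8);
    h.mCRC       = be32(p + 12);
    return h;
}
```

So the first async_read would ask for exactly kHeaderSize bytes, decodeHeader() would tell me how many more to expect, and a second read would fetch the rest.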
The data structure below is the initial request that starts the protocol (the one the server is waiting for). All the other data structures are similar: they share the same initial five fields (two uint16_t and three uint32_t), and each packet carries its message-specific data after that, depending on the mPacketId field.
//! Initial request structure sent to the server to start the protocol.
struct ConnectRequest {
// fields
uint16_t mMessageId;
uint16_t mByteCount;
uint32_t mPacketId;
uint32_t mReserved;
uint32_t mCRC;
//! move only message.
ConnectRequest() = delete;
//! delete the copy constructor.
ConnectRequest(const ConnectRequest&) = delete;
//! move constructor.
ConnectRequest(ConnectRequest&&) = default;
//! move assignment operator.
ConnectRequest& operator=(ConnectRequest&&) = default;
//! not assignable via lValue.
ConnectRequest& operator=(const ConnectRequest&) = delete;
/**
* Constructor.
*
* @param rPacketId [in] packet ID.
*/
explicit ConnectRequest(
const uint32_t& rPacketId)
: mMessageId(static_cast<uint16_t>(MessageType::ConnectRequest))
, mByteCount(sizeof(*this)) // NB: sizeof(this) would be the pointer size, not the struct size
, mPacketId(rPacketId)
, mReserved(0)
, mCRC(0)
{}
/**
* Construct message from raw memory buffer - no need for
* serialization interface.<p>
*
* @param pData [in] raw buffer pointer to datagram data in big
* endian format.
* @param rDataLength
* [in] datagram length - must be
* sizeof(ConnectRequest).
*
* @exception thrown if a bad argument is passed to the
* constructor.
*/
explicit ConnectRequest(
const uint8_t* pData,
const size_t& rDataLength)
{
if (pData && rDataLength == sizeof(*this)) {
UtlSafeBuffer safeBuffer(UtlSafeBuffer::ByteOrder::BigEndian);
safeBuffer.write(pData, rDataLength);
safeBuffer.setPosition(0, UtlSafeBuffer::OffsetMode::START);
safeBuffer.read(mMessageId);
safeBuffer.read(mByteCount);
safeBuffer.read(mPacketId);
safeBuffer.read(mReserved);
safeBuffer.read(mCRC);
} else {
throw std::invalid_argument(
"invalid buffer size: " + std::to_string(rDataLength));
}
}
/**
* Equality comparison - used to keep the log lean and only show
* changes, these are POD structs, so memcmp should work fine.
*
* @param rhs [in] ConnectRequest message.
*
* @return true if this messages are the same.
*
*/
inline bool operator==(const ConnectRequest& rhs) const {
return memcmp(this, &rhs, sizeof(ConnectRequest)) == 0;
}
//! returns the negated version of operator==(rhs).
inline bool operator!=(const ConnectRequest& rhs) const {
return !operator==(rhs);
}
/**
* Stream insert operator.<p>
*
* @param os [in,out] output stream.
* @param rhs [in] ConnectRequest to send to the output
* stream.
*
* @return a reference to the updated stream.
*/
friend std::ostream& operator<<(
std::ostream& os, const ConnectRequest& rhs) {
os << "ConnectRequest: "
<< "mMessageId[" << rhs.mMessageId
<< "], mByteCount[" << rhs.mByteCount
<< "], mPacketId[" << rhs.mPacketId
<< "], mCRC[" << rhs.mCRC
<< "]";
return os;
}
};
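A related worry: async_write of the raw struct sends the fields in host byte order, while the constructor-from-buffer above assumes big endian on the wire. If that matters, I was planning to serialize the header explicitly before sending, along these lines (the helpers and WireHeader name are mine, just a sketch):

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Mirror of the five common header fields (stand-in for ConnectRequest).
struct WireHeader {
    uint16_t mMessageId;
    uint16_t mByteCount;
    uint32_t mPacketId;
    uint32_t mReserved;
    uint32_t mCRC;
};

// Big-endian writers, independent of host byte order.
inline void putBe16(uint8_t* p, uint16_t v) {
    p[0] = uint8_t(v >> 8); p[1] = uint8_t(v);
}
inline void putBe32(uint8_t* p, uint32_t v) {
    p[0] = uint8_t(v >> 24); p[1] = uint8_t(v >> 16);
    p[2] = uint8_t(v >> 8);  p[3] = uint8_t(v);
}

// Serialize the header into a 16-byte wire image.
inline std::array<uint8_t, 16> serialize(const WireHeader& h) {
    std::array<uint8_t, 16> out{};
    putBe16(out.data(),      h.mMessageId);
    putBe16(out.data() + 2,  h.mByteCount);
    putBe32(out.data() + 4,  h.mPacketId);
    putBe32(out.data() + 8,  h.mReserved);
    putBe32(out.data() + 12, h.mCRC);
    return out;
}
```

The resulting array would then be handed to boost::asio::buffer() instead of the struct pointer, so padding and endianness stop being a concern.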
My modified version of the Boost example is below. Note that I changed the input_buffer_ field: the original used boost::asio::streambuf input_buffer_, and I replaced it mainly because I did not know how to read fixed-size data structures through a streambuf, and I thought a fixed 1024-byte array would suffice.
If anyone could point me toward a similar example of what I am trying to do, or show me how to keep the flow of control going, I would be very grateful.
Apologies for being stuck at such a basic stage, but I am new to Boost.Asio, particularly asynchronous TCP.
One more thing about the modified example: I have a fixed IP address and port, whereas the example uses the following approach to start the run loop. You can see I have a commented-out line of code, which came out of the whole resolver/query tangle.
try {
boost::asio::io_service io_service;
// boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::address::from_string("192.168.100.10"), 1080);
boost::asio::ip::tcp::resolver r(io_service);
FHDBUtility c(io_service);
c.start(r.resolve(boost::asio::ip::tcp::resolver::query(argv[1], argv[2])));
io_service.run();
} catch (std::exception& e) {
std::cerr << "Error: " << e.what() << std::endl;
}
I tweaked the code below to read the size of the first expected response structure, so at least the request and response are now exchanged.
#pragma once
// SYSTEM INCLUDES
#include <memory>
#include <iostream>
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/asio/deadline_timer.hpp>
// APPLICATION INCLUDES
#include "fhdb/ConnectRequest.h"
#include "fhdb/ConnectResponse.h"
// DEFINES
// MACROS
// EXTERNAL FUNCTIONS
// EXTERNAL VARIABLES
// CONSTANTS
// STRUCTS
// FORWARD DECLARATIONS
class FHDBUtility {
public:
FHDBUtility(boost::asio::io_service& io_service)
: stopped_(false)
, socket_(io_service)
, input_buffer_(std::make_unique<uint8_t[]>(1024))
, deadline_(io_service)
, heartbeat_timer_(io_service)
{}
// Called by the user of the FHDBUtility class to initiate the connection process.
// The endpoint iterator will have been obtained using a tcp::resolver.
void start(boost::asio::ip::tcp::resolver::iterator endpoint_iter) {
// Start the connect actor.
start_connect(endpoint_iter);
// Start the deadline actor. You will note that we're not setting any
// particular deadline here. Instead, the connect and input actors will
// update the deadline prior to each asynchronous operation.
deadline_.async_wait(boost::bind(&FHDBUtility::check_deadline, this));
}
// This function terminates all the actors to shut down the connection. It
// may be called by the user of the FHDBUtility class, or by the class itself in
// response to graceful termination or an unrecoverable error.
void stop()
{
stopped_ = true;
socket_.close();
deadline_.cancel();
heartbeat_timer_.cancel();
}
private:
void start_connect(boost::asio::ip::tcp::resolver::iterator endpoint_iter)
{
if (endpoint_iter != boost::asio::ip::tcp::resolver::iterator())
{
std::cout << "Trying " << endpoint_iter->endpoint() << "...\n";
// Set a deadline for the connect operation.
deadline_.expires_from_now(boost::posix_time::seconds(60));
// Start the asynchronous connect operation.
socket_.async_connect(endpoint_iter->endpoint(),
boost::bind(&FHDBUtility::handle_connect,
this, _1, endpoint_iter));
} else {
// There are no more endpoints to try. Shut down the FHDBUtility.
stop();
}
}
void handle_connect(const boost::system::error_code& ec,
boost::asio::ip::tcp::resolver::iterator endpoint_iter)
{
if (stopped_) {
return;
}
// The async_connect() function automatically opens the socket at the start
// of the asynchronous operation. If the socket is closed at this time then
// the timeout handler must have run first.
if (!socket_.is_open()) {
std::cout << "Connect timed out\n";
// Try the next available endpoint.
start_connect(++endpoint_iter);
} else if (ec) {
// Check if the connect operation failed before the deadline expired.
std::cout << "Connect error: " << ec.message() << "\n";
// We need to close the socket used in the previous connection attempt
// before starting a new one.
socket_.close();
// Try the next available endpoint.
start_connect(++endpoint_iter);
} else { // Otherwise we have successfully established a connection.
std::cout << "Connected to " << endpoint_iter->endpoint() << "\n";
// send async connect request
sendConnectRequest();
}
}
void start_read() {
// Set a deadline for the read operation.
deadline_.expires_from_now(boost::posix_time::seconds(30));
boost::asio::async_read(socket_,
boost::asio::buffer(input_buffer_.get(), sizeof(ConnectResponse)),
boost::bind(&FHDBUtility::handle_read, this, _1, _2, input_buffer_.get()));
}
void handle_read(
const boost::system::error_code& ec,
std::size_t bytes_transferred,
const uint8_t* pReceiveBuffer)
{
if (stopped_) {
return;
}
if (!ec) {
if (bytes_transferred == sizeof(ConnectResponse)) {
auto connectResponse = std::make_unique<ConnectResponse>(
pReceiveBuffer, bytes_transferred);
}
start_read();
} else {
std::cout << "Error on receive: " << ec.message() << "\n";
stop();
}
}
void sendConnectRequest() {
if (stopped_) {
return;
}
// make initial connect request using PacketId set to 1
const auto connectRequest = std::make_shared<ConnectRequest>(1);
// send the connect request packet to the CMC to initiate the protocol
boost::asio::async_write(socket_,
boost::asio::buffer(connectRequest.get(), sizeof(ConnectRequest)),
// bind the shared_ptr as well so the request buffer stays alive
// until the asynchronous write has completed
boost::bind(&FHDBUtility::handle_write, this, _1,
connectRequest->mPacketId, connectRequest));
}
void handle_write(const boost::system::error_code& ec, const uint32_t packetId,
std::shared_ptr<ConnectRequest> /*keepAlive*/) {
if (stopped_) {
return;
}
if (!ec) {
// Wait 10 seconds before sending the next heartbeat.
heartbeat_timer_.expires_from_now(boost::posix_time::seconds(10));
start_read();
switch (packetId) {
}
//heartbeat_timer_.async_wait(boost::bind(&FHDBUtility::start_write, this));
} else {
std::cout << "Error on heartbeat: " << ec.message() << "\n";
stop();
}
}
void check_deadline() {
if (stopped_) {
return;
}
// Check whether the deadline has passed. We compare the deadline against
// the current time since a new asynchronous operation may have moved the
// deadline before this actor had a chance to run.
if (deadline_.expires_at() <= boost::asio::deadline_timer::traits_type::now()) {
// The deadline has passed. The socket is closed so that any outstanding
// asynchronous operations are cancelled.
socket_.close();
// There is no longer an active deadline. The expiry is set to positive
// infinity so that the actor takes no action until a new deadline is set.
deadline_.expires_at(boost::posix_time::pos_infin);
}
// Put the actor back to sleep.
deadline_.async_wait(boost::bind(&FHDBUtility::check_deadline, this));
}
private:
bool stopped_;
boost::asio::ip::tcp::socket socket_;
std::unique_ptr<uint8_t[]> input_buffer_;
boost::asio::deadline_timer deadline_;
boost::asio::deadline_timer heartbeat_timer_;
};