I am working on a client-server architecture. I send roughly 8,000,000 bytes of data from the client to the server. I was somewhat surprised to find that my client can send the data in 704 ms, while the server needs 3922 ms to receive it.

I do no processing on the data at the server side; it simply receives it. The client and server also run on identical hardware. I inspected the traffic with Wireshark, yet the gap between the client-side and server-side timings still appears to be about 6x. Why is receiving so much slower than sending?

Note: I use std::clock() to measure the execution time of both the client and the server, and the data is transferred to the server over an Ethernet connection.
Stats:
// Client code
std::clock_t c_start = std::clock();
for (int i = 0; i < 100000; i++) // write data into the buffer
{
    m_vector.push_back(i);
}
uint32_t siz = m_vector.size() * sizeof(double);
int total_bytes = 0;
int count = 0;
for (int j = 0; j < 1000; j++)
{
    bytesSent = send(ConnectSocket, (char*)&siz, 4, 0);
    assert(bytesSent == sizeof(uint32_t));
    std::cout << "length information is in: " << bytesSent << " bytes" << std::endl;
    bytesSent = send(ConnectSocket, (char*)m_vector.data(), siz, 0);
    total_bytes = total_bytes + bytesSent;
}
closesocket(ConnectSocket);
std::clock_t c_end = std::clock();
std::cout << "CPU time used: " << 1000.0 * (c_end - c_start) / CLOCKS_PER_SEC << " ms\n";
WSACleanup();
system("pause");
return 0;
}
// Server code
std::clock_t c_start; // declared outside the loop so the final timing line can see it
while (1)
{
    // receive the data length from the client
    int length_received = recv(m_socket, (char*)&nlength, 4, 0);
    m_vector.resize(nlength / sizeof(double));
    // receive the data itself from the client
    bytesRecv = recv(m_socket, (char*)m_vector.data(), nlength, 0);
    count++;
    if (count == 1)
    {
        c_start = std::clock(); // start the timer when the first block arrives
    }
    if (bytesRecv > 0)
    {
        total_br = total_br + bytesRecv;
        v1 = m_vector;
        std::cout << "Server: received bytes so far: " << total_br << std::endl;
    }
    else
    {
        break;
    }
}
closesocket(m_socket);
std::clock_t c_end = std::clock();
std::cout << "CPU time used: " << 1000.0 * (c_end - c_start) / CLOCKS_PER_SEC << " ms\n";
WSACleanup();
system("pause");
return 0;
}
Answer 0 (score: 0)
In TCP, a send() merely means that the data was written into the local socket send buffer; actually putting TCP segments onto the network happens asynchronously. This means that even after you close the socket, not all of the data has necessarily been transmitted yet, so a sender-side time measurement is essentially meaningless.

There is nothing remarkable about a transfer taking 3922 ms to receive.
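A related caveat on the receive side: recv() on a TCP stream socket may return fewer bytes than requested, so a single recv() call is not guaranteed to deliver a whole block; robust code loops until the expected length has arrived. A minimal sketch of such a loop (the helper name recv_all is my own; it is written against a recv-like callable rather than a raw socket so it can be exercised without a network):

```cpp
#include <cassert>
#include <cstring>
#include <functional>

// Reads exactly `len` bytes by calling `recv_fn` until the buffer is full.
// `recv_fn` mimics recv(): it returns the number of bytes produced,
// 0 on orderly shutdown, or a negative value on error.
int recv_all(char* buf, int len, const std::function<int(char*, int)>& recv_fn)
{
    int total = 0;
    while (total < len) {
        int n = recv_fn(buf + total, len - total);
        if (n <= 0)
            return n; // error, or the peer closed before `len` bytes arrived
        total += n;
    }
    return total;
}
```

With a real socket, recv_fn would simply wrap recv(m_socket, p, n, 0); the question's server would call it once for the 4-byte length and once for the payload.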