In my server application's network protocol, every buffer I send starts with 2 bytes containing its length, and the TcpReadCallback method reads the bytes written by BeginReceive until it has read a complete buffer. With my implementation, however, packets seem to get lost, and sometimes empty buffers are passed to ProcessReceiveBuffer. This happens especially when I send messages less than 5 ms apart, and more often over the internet (it occurs every 10 seconds or so when sending twice per second). Can anyone spot what I'm doing wrong here?
Thanks!
[17:29:49]: received: 1349, 514
[17:29:50]: received: 1350, 514
[17:29:50]: received: 1351, 514
[17:29:51]: received: 1352, 514
[17:29:51]: received: 1353, 514
[17:29:52]: received: 1355, 514
[17:29:52]: Skipped! expected 1354, got 1355
[17:29:53]: received: 1357, 514
[17:29:53]: Skipped! expected 1356, got 1357
[17:29:54]: received: 1359, 514
[17:29:54]: Skipped! expected 1358, got 1359
[17:29:56]: received: 1362, 514
[17:29:56]: Skipped! expected 1360, got 1362
[17:29:56]: received: 1363, 514
[17:29:57]: received: 1364, 514
The log above is an example: the first number after "received" is the order in which the client sent the message, the second is the size. When sending over the internet, packets are often dropped in bursts, even though this is TCP.
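Since TCP is a byte stream, a 2-byte length prefix only works if the receiver buffers incoming bytes and extracts frames once they are complete; consuming a partial frame or discarding leftover bytes produces exactly the kind of loss described above. Below is a minimal, language-neutral sketch of such a frame parser (in Python for brevity; `feed` is a hypothetical helper, and the little-endian prefix is an assumption, since the question does not state the byte order):

```python
import struct

def feed(buffer: bytearray, data: bytes) -> list:
    """Append newly received bytes and return every complete frame.

    Each frame is prefixed with a 2-byte little-endian length
    (the endianness is an assumption). Leftover bytes stay in
    `buffer` until the rest of the frame arrives.
    """
    buffer.extend(data)
    frames = []
    while True:
        if len(buffer) < 2:
            break                       # length prefix not yet complete
        (length,) = struct.unpack_from("<H", buffer, 0)
        if len(buffer) < 2 + length:
            break                       # frame body not yet complete
        frames.append(bytes(buffer[2:2 + length]))
        del buffer[:2 + length]         # drop only the consumed frame
    return frames

# A 514-byte payload may arrive split across several receive callbacks:
buf = bytearray()
payload = bytes(range(256)) * 2 + b"\x00\x00"    # 514 bytes
wire = struct.pack("<H", len(payload)) + payload
assert feed(buf, wire[:100]) == []               # partial frame: nothing yet
assert feed(buf, wire[100:]) == [payload]        # remainder completes it
```

The key point is that one receive callback may deliver half a frame, exactly one frame, or several frames plus a fragment of the next; the parser must handle all three cases.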
Answer 0 (score: 0)
Due to network congestion, EndReceive may not receive the complete message in a single call. TcpReadCallback should therefore repeat the read operation until no more bytes are received. Here is the relevant MSDN example illustrating this:
public static void Read_Callback(IAsyncResult ar) {
    StateObject so = (StateObject) ar.AsyncState;
    Socket s = so.workSocket;
    int read = s.EndReceive(ar);
    if (read > 0) {
        so.sb.Append(Encoding.ASCII.GetString(so.buffer, 0, read));
        s.BeginReceive(so.buffer, 0, StateObject.BUFFER_SIZE, 0,
                       new AsyncCallback(Async_Send_Receive.Read_Callback), so);
    }
    else {
        if (so.sb.Length > 1) {
            // All of the data has been read, so display it on the console
            string strContent;
            strContent = so.sb.ToString();
            Console.WriteLine(String.Format("Read {0} bytes from socket, " +
                              "data = {1}", strContent.Length, strContent));
        }
        s.Close();
    }
}
A similar effect with the underlying OS-level TCP socket send() and recv() calls is discussed in a related post.
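The same pattern applies at that level too: a single recv() may return fewer bytes than requested, so a caller that needs an exact count must loop. A minimal sketch (in Python; `recv_exact` is a hypothetical helper, not part of any standard library):

```python
import socket

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, looping because a single recv() may
    return fewer bytes than requested."""
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:
            # recv() returning b"" means the peer closed the connection
            raise ConnectionError("socket closed before full message arrived")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)
```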
Answer 1 (score: 0)