I have set up two Raspberry Pis to communicate over UDP sockets, one as the client and one as the server. Both kernels have been patched with RT-PREEMPT (4.9.43-rt30+). The client echoes messages back to the server so that the round-trip latency (RTL) can be computed. Currently the server sends at 10 Hz using two threads: one that sends messages to the client and one that receives them back. Both threads are given scheduling priority 95 with round-robin (SCHED_RR) scheduling.
The server constructs a message containing the time at which it is sent and the time elapsed since sending began. The message is sent to the client, which immediately returns it. On receipt, the server computes the round-trip latency and stores it in a .txt file for plotting with Python.
The problem is that when analyzing the plots I noticed periodic spikes in the RTL. See the top graph of the image: RTL latency and sendto() + recvfrom() times (in the legend I used RTT instead of RTL). These spikes correlate directly with the spikes visible in the server-side sendto() and recvfrom() call times. Any suggestions on how to remove these spikes? My application depends heavily on consistency.
Things I have tried and noticed:
I am by no means an expert in socket / C++ / Linux programming, so any advice given will be highly appreciated. Below is the code used to create the socket and start the server threads for sending and receiving messages, followed by the code that sends messages from the server. Let me know if you need the rest, but for now my concern centers on the latency caused by the sendto() call. Thanks.
thread_priority = priority;
recv_buff = recv_buff_len;
std::cout << del << " Second start-up delay..." << std::endl;
sleep(del);
std::cout << "Delay complete..." << std::endl;
// Master socket creation
master = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
if(master < 0){// socket() returns -1 on failure, so test for < 0, not == 0
perror("Could not create the socket");
exit(EXIT_FAILURE);
}
std::cout << "Master Socket Created..." << std::endl;
std::cout << "Adjusting send and receive buffers..." << std::endl;
setBuff();
// Server address and port creation
serv.sin_family = AF_INET;// Address family
serv.sin_addr.s_addr = INADDR_ANY;// Server IP address; INADDR_ANY works on the server side only
serv.sin_port = htons(portNum);
server_len = sizeof(serv);
// Binding of master socket to specified address and port
if (bind(master, (struct sockaddr *) &serv, sizeof (serv)) < 0) {
//Attempt to bind master socket to address
perror("Could not bind socket...");
exit(EXIT_FAILURE);
}
// Show what address and port is being used
char IP[INET_ADDRSTRLEN];
inet_ntop(AF_INET, &(serv.sin_addr), IP, INET_ADDRSTRLEN);// INADDR_ANY binds all network interfaces, so this will always show 0.0.0.0
std::cout << "Listening on port: " << ntohs(serv.sin_port) << ", and address: " << IP << "..." << std::endl;
// Options specific to the server RPi
if(server){
std::cout << "Run Time: " << duration << " seconds." << std::endl;
client.sin_family = AF_INET;// Address family
inet_pton(AF_INET, clientIP.c_str(), &(client.sin_addr));
client.sin_port = htons(portNum);
client_len = sizeof(client);
serv_send = std::thread(&SocketServer::serverSend, this);
serv_send.detach();// The server send thread just runs continuously
serv_receive = std::thread(&SocketServer::serverReceive, this);
serv_receive.join();
}else{// Specific to client RPi
SocketServer::clientReceiveSend();
}
The code that sends the messages:
// Setup the priority of this thread
param.sched_priority = thread_priority;
int result = sched_setscheduler(getpid(), SCHED_RR, &param);
if(result){
perror ("The following error occurred while setting serverSend() priority");
}
int ched = sched_getscheduler(getpid());
printf("serverSend() priority result %i : Scheduler priority id %i \n", result, ched);
std::ofstream Out;
std::ofstream Out1;
Out.open(file_name);
Out << duration << std::endl;
Out << frequency << std::endl;
Out << thread_priority << std::endl;
Out.close();
Out1.open("Server Side Send.txt");
packets_sent = 0;
Tbegin = std::chrono::high_resolution_clock::now();
// Send messages for a specified time period at a specified frequency
while(!stop){
// Setup the message to be sent
Tstart = std::chrono::high_resolution_clock::now();
TDEL = std::chrono::duration_cast< std::chrono::duration<double>>(Tstart - Tbegin); // Total time passed before sending message
memcpy(&message[0], &Tstart, sizeof(Tstart));// Send the time the message was sent with the message
memcpy(&message[8], &TDEL, sizeof(TDEL));// Send the time that has passed since Tbegin
// Send the message to the client
T1 = std::chrono::high_resolution_clock::now();
sendto(master, &message, 16, MSG_DONTWAIT, (struct sockaddr *)&client, client_len);
T2 = std::chrono::high_resolution_clock::now();
T3 = std::chrono::duration_cast< std::chrono::duration<double>>(T2-T1);
Out1 << T3.count() << std::endl;
packets_sent++;
// Pause so that the required message send frequency is met
while(true){
Tend = std::chrono::high_resolution_clock::now();
Tdel = std::chrono::duration_cast< std::chrono::duration<double>>(Tend - Tstart);
if(Tdel.count() > 1.0/frequency){// 1.0 so an integer frequency does not truncate to zero
break;
}
}
TDEL = std::chrono::duration_cast< std::chrono::duration<double>>(Tend - Tbegin);
// Check to see if the program has run as long as required
if(TDEL.count() > duration){
stop = true;
break;
}
}
std::cout << "Exiting serverSend() thread..." << std::endl;
// Save extra results to the end of the last file
Out.open(file_name, std::ios_base::app);
Out << packets_sent << "\t\t " << packets_returned << std::endl;
Out.close();
Out1.close();
std::cout << "^C to exit..." << std::endl;
Answer 0 (score: 2)
I have solved the problem. It was not the ARP table, because the periodic spikes still appeared even with the ARP functionality disabled. With ARP disabled, the latency showed only isolated spikes rather than a series of latency spikes.
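For anyone wanting to rule out ARP without disabling it entirely, a permanent neighbor entry can be pinned instead, so the kernel never needs to re-resolve the peer's MAC address mid-run. The IP, MAC, and interface below are placeholders for your client's actual values:

```shell
# Pin the client's MAC address so periodic ARP refreshes cannot stall sendto().
# Replace the address, MAC, and interface with your client's actual values.
sudo ip neigh replace 192.168.1.20 lladdr b8:27:eb:00:00:01 dev eth0 nud permanent
```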
It turned out to be an issue with how I was using threads: with two threads on one CPU core, only one can run at a time, so the sending thread was being disturbed by the receiving thread. I changed the thread priorities around a lot (send higher than receive, receive higher than send, send equal to receive) to no avail. I have now bought a Raspberry Pi with 4 cores, and I pin the send thread to core 2 and the receive thread to core 3, preventing the threads from interfering with each other. This not only removed the latency spikes but also reduced the average latency of my setup.