I'm playing with the new C++ standard. I wrote a test to observe the behavior of the scheduling algorithm and see what happens to the threads. Given context-switch overhead, I expected the real wall-clock wait time of a particular thread to be a bit longer than the value specified via the std::this_thread::sleep_for() function. But surprisingly, it is sometimes even less than the sleep time! I can't figure out why this happens, or what I'm doing wrong...
#include <iostream>
#include <thread>
#include <random>
#include <vector>
#include <functional>
#include <math.h>
#include <unistd.h>
#include <sys/time.h>

void heavy_job()
{
    // here we're doing some kind of time-consuming job..
    int j = 0;
    while (j < 1000)
    {
        int* a = new int[100];
        for (int i = 0; i < 100; ++i)
            a[i] = i;
        delete[] a;
        for (double x = 0; x < 10000; x += 0.1)
            sqrt(x);
        ++j;
    }
    std::cout << "heavy job finished" << std::endl;
}

void light_job(const std::vector<int>& wait)
{
    struct timeval start, end;
    long utime, seconds, useconds;
    std::cout << std::showpos;
    for (std::vector<int>::const_iterator i = wait.begin();
         i != wait.end(); ++i)
    {
        gettimeofday(&start, NULL);
        std::this_thread::sleep_for(std::chrono::microseconds(*i));
        gettimeofday(&end, NULL);
        seconds = end.tv_sec - start.tv_sec;
        useconds = end.tv_usec - start.tv_usec;
        utime = ((seconds) * 1000 + useconds/1000.0);
        double delay = *i - utime*1000;
        std::cout << "delay: " << delay/1000.0 << std::endl;
    }
}

int main()
{
    std::vector<int> wait_times;
    std::uniform_int_distribution<unsigned int> unif;
    std::random_device rd;
    std::mt19937 engine(rd());
    std::function<unsigned int()> rnd = std::bind(unif, engine);
    for (int i = 0; i < 1000; ++i)
        wait_times.push_back(rnd()%100000+1); // random sleep time between 1 µs and 100 ms
    std::thread heavy(heavy_job);
    std::thread light(light_job, wait_times);
    light.join();
    heavy.join();
    return 0;
}
Output on my Intel Core i5 machine:
.....
delay: +0.713
delay: +0.509
delay: -0.008 // !
delay: -0.043 // !!
delay: +0.409
delay: +0.202
delay: +0.077
delay: -0.027 // ?
delay: +0.108
delay: +0.71
delay: +0.498
delay: +0.239
delay: +0.838
delay: -0.017 // also !
delay: +0.157
Answer (score: 3)
Your timing code is causing integer truncation.
utime = ((seconds) * 1000 + useconds/1000.0);
double delay = *i - utime*1000;
Suppose your wait time is 888888 microseconds and you slept exactly that long. seconds will be 0 and useconds will be 888888. Dividing useconds by 1000.0 gives 888.888; adding seconds * 1000 (which is 0) still yields 888.888. That value is then assigned to the long utime, truncating it to 888, for an apparent delay of 888.888 - 888 = 0.888 ms.
You should update utime to actually store microseconds, so that there is no truncation, and also because the name implies the unit is microseconds, just like useconds. Something like:
long utime = seconds * 1000000 + useconds;
You also have the delay calculation backwards. Ignoring the effects of the truncation, it should be:
double delay = utime*1000 - *i;
std::cout << "delay: " << delay/1000.0 << std::endl;
The way you have it, every positive delay you print is actually the result of the truncation, while the negative ones represent the real delays. (Once utime is corrected to hold microseconds as above, the expression becomes simply double delay = utime - *i;.)