So I'm trying to use std::chrono::high_resolution_clock to time how long something takes to execute. I figured you can just take the difference between the start time and the end time...
To check my approach, I made the following program:
#include <iostream>
#include <chrono>
#include <vector>

void long_function();

int main()
{
    std::chrono::high_resolution_clock timer;
    auto start_time = timer.now();
    long_function();
    auto end_time = timer.now();
    auto diff_millis = std::chrono::duration_cast<std::chrono::duration<int, std::milli>>(end_time - start_time);
    std::cout << "It took " << diff_millis.count() << "ms" << std::endl;
    return 0;
}
void long_function()
{
    //Should take a while to execute.
    //This is calculating the first 100 million
    //fib numbers and storing them in a vector.
    //Well, it doesn't actually, because it
    //overflows very quickly, but the point is it
    //should take a few seconds to execute.
    std::vector<unsigned long> numbers;
    numbers.push_back(1);
    numbers.push_back(1);
    for(int i = 2; i < 100000000; i++)
    {
        numbers.push_back(numbers[i-2] + numbers[i-1]);
    }
}
The problem is, it just outputs 3000ms, which is not how long it actually took.
On shorter problems it just outputs 0ms... What am I doing wrong?
Edit: In case it's useful, I'm using the GNU GCC compiler with the -std=c++0x flag.
Answer 0 (Score: 2)
The resolution of high_resolution_clock is platform-dependent.
Printing the following will give you an idea of the resolution of the implementation you are using:
std::cout << "It took " << std::chrono::nanoseconds(end_time - start_time).count() << std::endl;
Answer 1 (Score: 1)
I ran into a similar problem under Windows 7 with g++ 4.8.1 (rev5, built by the MinGW-W64 project).
#include <iostream>
#include <chrono>

int main()
{
    auto start_time = std::chrono::high_resolution_clock::now();
    int temp(1);
    const int n(1e7);
    for (int i = 0; i < n; i++)
        temp += temp;
    auto end_time = std::chrono::high_resolution_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(end_time - start_time).count() << " ns.";
    return 0;
}
With n = 1e7 it shows 19999800 ns, but with n = 1e6 it shows 0 ns.
The precision seems to be quite weak.
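A rough way to see that in practice (a sketch of my own, not taken from this answer) is to spin until the reported time actually changes; the first nonzero difference is the smallest step the clock can observe on that platform:

#include <chrono>
#include <iostream>

int main()
{
    using clock = std::chrono::high_resolution_clock;
    auto t0 = clock::now();
    auto t1 = t0;
    // Busy-wait until the clock reports a later time point.
    while (t1 == t0)
        t1 = clock::now();
    std::cout << "smallest observable step: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count()
              << " ns" << std::endl;
    return 0;
}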