I wrote some simple code to test perf:
#include &lt;cmath&gt;
#include &lt;random&gt;

double bar_compute(double d) {
    double t = std::abs(d);
    t += std::sqrt(d);
    t += std::cos(d);
    return t;
}

// Do some computation n times
double foo_compute(unsigned n) {
    std::random_device rd;
    std::mt19937 mt(rd());
    std::uniform_real_distribution&lt;double&gt; dist(0.0, 1.0);
    double total = 0;
    for (unsigned i = 0; i < n; i++) {
        double d = dist(mt);
        total += bar_compute(d);
    }
    return total;
}
When I run perf and look at the output, I see:
56.14% runcode libm-2.23.so [.] __cos_avx
27.34% runcode runcode [.] _Z11foo_computej
13.92% runcode runcode [.] _Z11bar_computed
0.86% runcode libm-2.23.so [.] do_cos_slow.isra.1
0.44% runcode runcode [.] cos@plt
0.41% runcode libm-2.23.so [.] sloww1
0.35% runcode libm-2.23.so [.] __dubcos
0.17% runcode ld-2.23.so [.] _dl_lookup_symbol_x
What do do_cos_slow.isra and sloww1 mean? Can I use a faster version of cos? Otherwise, why is it called slow?
Answer (score: 6):
do_cos_slow comes from its declaration in glibc/sysdeps/ieee754/dbl-64/s_sin.c. It is called do_cos_slow because, according to the comment above its declaration at line 164, it is more precise than do_cos, the function it is based on.
The .isra suffix means the function was cloned by GCC's interprocedural scalar replacement of aggregates (IPA-SRA) pass, per the Stack Overflow answer "What does the GCC function suffix 'isra' mean?".
sloww1 is a function that computes sin(x + dx), according to the comment above it.
As for a faster version of cos: I'm not sure there is one in your current setup, but if you update glibc (which provides libm) to at least version 2.28, you will benefit from Wilco Dijkstra's removal of these slow-path functions and his sincos refactor, which improves speed:
Refactor the sincos implementation - rather than rely on odd partial inlining
of preprocessed portions from sin and cos, explicitly write out the cases.
This makes sincos much easier to maintain and provides an additional 16-20%
speedup between 0 and 2^27. The overall speedup of sincos is 48% over this range.
Between 0 and PI it is 66% faster.
Other alternatives you can try are other libc or libm implementations, or other cos implementations such as avx_mathfun, avx_mathfun with some fixes for newer GCC, or supersimd.