The code I'm writing essentially uses SSE2 to optimize this:
double *pA = a;
double *pB = b[voiceIndex];
double *pC = c[voiceIndex];
for (int sampleIndex = 0; sampleIndex < blockSize; sampleIndex++) {
pC[sampleIndex] = exp((mMin + std::clamp(pA[sampleIndex] + pB[sampleIndex], 0.0, 1.0) * mRange) * ln2per12);
}
Here it is:
double *pA = a;
double *pB = b[voiceIndex];
double *pC = c[voiceIndex];
// SSE2
__m128d bound_lower = _mm_set1_pd(0.0);
__m128d bound_upper = _mm_set1_pd(1.0);
__m128d rangeLn2per12 = _mm_set1_pd(mRange * ln2per12);
__m128d minLn2per12 = _mm_set1_pd(mMin * ln2per12);
__m128d loaded_a = _mm_load_pd(pA);
__m128d loaded_b = _mm_load_pd(pB);
__m128d result = _mm_add_pd(loaded_a, loaded_b);
result = _mm_max_pd(bound_lower, result);
result = _mm_min_pd(bound_upper, result);
result = _mm_mul_pd(rangeLn2per12, result);
result = _mm_add_pd(minLn2per12, result);
double *pCEnd = pC + roundintup8(blockSize);
for (; pC < pCEnd; pA += 8, pB += 8, pC += 8) {
_mm_store_pd(pC, result);
loaded_a = _mm_load_pd(pA + 2);
loaded_b = _mm_load_pd(pB + 2);
result = _mm_add_pd(loaded_a, loaded_b);
result = _mm_max_pd(bound_lower, result);
result = _mm_min_pd(bound_upper, result);
result = _mm_mul_pd(rangeLn2per12, result);
result = _mm_add_pd(minLn2per12, result);
_mm_store_pd(pC + 2, result);
loaded_a = _mm_load_pd(pA + 4);
loaded_b = _mm_load_pd(pB + 4);
result = _mm_add_pd(loaded_a, loaded_b);
result = _mm_max_pd(bound_lower, result);
result = _mm_min_pd(bound_upper, result);
result = _mm_mul_pd(rangeLn2per12, result);
result = _mm_add_pd(minLn2per12, result);
_mm_store_pd(pC + 4, result);
loaded_a = _mm_load_pd(pA + 6);
loaded_b = _mm_load_pd(pB + 6);
result = _mm_add_pd(loaded_a, loaded_b);
result = _mm_max_pd(bound_lower, result);
result = _mm_min_pd(bound_upper, result);
result = _mm_mul_pd(rangeLn2per12, result);
result = _mm_add_pd(minLn2per12, result);
_mm_store_pd(pC + 6, result);
loaded_a = _mm_load_pd(pA + 8);
loaded_b = _mm_load_pd(pB + 8);
result = _mm_add_pd(loaded_a, loaded_b);
result = _mm_max_pd(bound_lower, result);
result = _mm_min_pd(bound_upper, result);
result = _mm_mul_pd(rangeLn2per12, result);
result = _mm_add_pd(minLn2per12, result);
}
It works pretty well, I'd say. But I can't find any exp function for SSE2 to complete the chain of operations.

Reading this, it seems I need to call the standard exp() from the library?

Really? Isn't that a penalty? Is there another way? A different built-in function, maybe?

I'm on MSVC, with /arch:SSE2 and /O2, producing 32-bit code.
Answer 0 (score: 5)
There are several libraries that provide vectorized exponentials, with more or less accuracy.

In my experience, all of them are faster and more precise than a custom Padé approximation (not even talking about an unstable Taylor expansion, which would quickly give you negative numbers).

For SVML, IPP and MKL, I would check which is better: calling exp from inside your loop, or calling exp once over the whole array (since the libraries could use AVX-512 instead of just SSE2).
Answer 1 (score: 4)
The simplest way is to use an approximation of the exponential. One possibility is based on the limit

    exp(x) = lim(n→∞) (1 + x/n)^n

taking n = 256 = 2^8:
__m128d fastExp1(__m128d x)
{
__m128d ret = _mm_mul_pd(_mm_set1_pd(1.0 / 256), x);
ret = _mm_add_pd(_mm_set1_pd(1.0), ret);
ret = _mm_mul_pd(ret, ret);
ret = _mm_mul_pd(ret, ret);
ret = _mm_mul_pd(ret, ret);
ret = _mm_mul_pd(ret, ret);
ret = _mm_mul_pd(ret, ret);
ret = _mm_mul_pd(ret, ret);
ret = _mm_mul_pd(ret, ret);
ret = _mm_mul_pd(ret, ret);
return ret;
}
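To see what fastExp1 is doing, here is a scalar model (an illustrative sketch added here, not part of the original answer; the name fastExp1_scalar is made up): each _mm_mul_pd(ret, ret) squares the value, and eight squarings raise it to the 256th power.

```cpp
#include <cassert>
#include <cmath>

// Scalar model of fastExp1: exp(x) ~ (1 + x/256)^256, where the 256th
// power is taken by squaring the value 8 times (2^8 = 256).
double fastExp1_scalar(double x)
{
    double r = 1.0 + x * (1.0 / 256);
    for (int i = 0; i < 8; ++i)
        r *= r;  // each squaring doubles the power: 2, 4, 8, ..., 256
    return r;
}
```

For x in [0..1) this stays within roughly 0.2% of std::exp(x).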
Another idea is a polynomial expansion, in particular a Taylor series expansion:
__m128d fastExp2(__m128d x)
{
const __m128d a0 = _mm_set1_pd(1.0);
const __m128d a1 = _mm_set1_pd(1.0);
const __m128d a2 = _mm_set1_pd(1.0 / 2);
const __m128d a3 = _mm_set1_pd(1.0 / 2 / 3);
const __m128d a4 = _mm_set1_pd(1.0 / 2 / 3 / 4);
const __m128d a5 = _mm_set1_pd(1.0 / 2 / 3 / 4 / 5);
const __m128d a6 = _mm_set1_pd(1.0 / 2 / 3 / 4 / 5 / 6);
const __m128d a7 = _mm_set1_pd(1.0 / 2 / 3 / 4 / 5 / 6 / 7);
__m128d ret = _mm_fmadd_pd(a7, x, a6);
ret = _mm_fmadd_pd(ret, x, a5);
// If the FMA extension is not present, use
// ret = _mm_add_pd(_mm_mul_pd(ret, x), a5);
ret = _mm_fmadd_pd(ret, x, a4);
ret = _mm_fmadd_pd(ret, x, a3);
ret = _mm_fmadd_pd(ret, x, a2);
ret = _mm_fmadd_pd(ret, x, a1);
ret = _mm_fmadd_pd(ret, x, a0);
return ret;
}
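As a sanity check on the coefficients, a scalar Horner version of the same polynomial can be compared against std::exp without FMA hardware (an added sketch; fastExp2_scalar is a name invented here):

```cpp
#include <cassert>
#include <cmath>

// Scalar Horner evaluation of the same degree-7 Taylor polynomial.
double fastExp2_scalar(double x)
{
    // Coefficients 1/7!, 1/6!, ..., 1/1!, 1/0!, matching a7 down to a0 above.
    const double a[8] = { 1.0 / 5040, 1.0 / 720, 1.0 / 120, 1.0 / 24,
                          1.0 / 6,    1.0 / 2,   1.0,       1.0 };
    double r = a[0];
    for (int i = 1; i < 8; ++i)
        r = r * x + a[i];  // Horner step, mirroring each _mm_fmadd_pd
    return r;
}
```

At x = 1 the degree-7 truncation is already within about 3e-5 of the true value.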
Note that with the same number of expansion terms, you can get a better approximation if you fit the polynomial to a specific x range, e.g. using least squares.

All of these methods work over a very limited x range, but with continuous derivatives, which can be important in some cases.

There is a trick to approximate the exponential over a very wide range, at the cost of noticeable piecewise-linear regions. It is based on reinterpreting integer bit patterns as floating-point numbers. For a more accurate description, I recommend these references:
Piecewise linear approximation to exponential and logarithm
A Fast, Compact Approximation of the Exponential Function
A possible implementation of this method:
__m128d fastExp3(__m128d x)
{
const __m128d a = _mm_set1_pd(1.0 / M_LN2);
const __m128d b = _mm_set1_pd(3 * 1024.0 - 1.05);
__m128d t = _mm_fmadd_pd(x, a, b);
return _mm_castsi128_pd(_mm_slli_epi64(_mm_castpd_si128(t), 11));
}
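To see why this works, here is a scalar model using std::memcpy in place of the cast intrinsics (an added sketch, not from the original answer; fastExp3_scalar is a made-up name):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

// Scalar model of fastExp3: t = x/ln2 + (3*1024 - 1.05) is constructed so
// that shifting its IEEE-754 bit pattern left by 11 lands the integer part
// of x/ln2 in the exponent field, while the fractional part linearly
// interpolates the mantissa between adjacent powers of two.
double fastExp3_scalar(double x)
{
    const double kLn2 = 0.6931471805599453;  // M_LN2
    double t = x * (1.0 / kLn2) + (3 * 1024.0 - 1.05);
    uint64_t bits;
    std::memcpy(&bits, &t, sizeof bits);  // reinterpret without aliasing UB
    bits <<= 11;                          // same as _mm_slli_epi64(..., 11)
    std::memcpy(&t, &bits, sizeof t);
    return t;
}
```

The relative error is a few percent at worst, consistent with a piecewise-linear interpolation of the mantissa.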
Although this method is simple and covers a very wide x range, be careful when using it in further math. Over small regions it gives a piecewise-linear approximation, which can break sensitive algorithms, especially those that use differentiation.
To compare the accuracy of the different methods, look at the graphs. The first graph is plotted for the range x = [0..1). As you can see, the best approximation in that case is given by fastExp2(x); slightly worse but acceptable is fastExp1(x). The worst approximation comes from fastExp3(x): the piecewise structure is noticeable, and discontinuities of the first derivative are present.

In the range x = [0..10), the fastExp3(x) method provides the best approximation, with fastExp1(x) slightly worse; for the same amount of computation it gives more accuracy than fastExp2(x).
The next step is to improve the accuracy of the fastExp3(x) algorithm. The simplest way to significantly increase accuracy is to use the equality exp(x) = exp(x/2)/exp(-x/2); although it adds computation, it greatly reduces the error thanks to mutual error compensation in the division:

__m128d fastExp5(__m128d x)
{
const __m128d ap = _mm_set1_pd(0.5 / M_LN2);
const __m128d an = _mm_set1_pd(-0.5 / M_LN2);
const __m128d b = _mm_set1_pd(3 * 1024.0 - 1.05);
__m128d tp = _mm_fmadd_pd(x, ap, b);
__m128d tn = _mm_fmadd_pd(x, an, b);
tp = _mm_castsi128_pd(_mm_slli_epi64(_mm_castpd_si128(tp), 11));
tn = _mm_castsi128_pd(_mm_slli_epi64(_mm_castpd_si128(tn), 11));
return _mm_div_pd(tp, tn);
}
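A scalar model of the same compensation (an added sketch; fastExp5_scalar and shiftTrick are names invented here) shows the error dropping from a few percent to roughly 0.1% or less near x = 1:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

// The fastExp3 bit trick as a helper: shift the IEEE-754 bit pattern of t
// left by 11, reinterpreting via memcpy.
static double shiftTrick(double t)
{
    uint64_t bits;
    std::memcpy(&bits, &t, sizeof bits);
    bits <<= 11;
    std::memcpy(&t, &bits, sizeof t);
    return t;
}

// Scalar model of fastExp5: approximate exp(x/2) and exp(-x/2) with the
// bit trick, then divide; the two approximation errors largely cancel.
double fastExp5_scalar(double x)
{
    const double kLn2 = 0.6931471805599453;  // M_LN2
    const double b = 3 * 1024.0 - 1.05;
    double tp = shiftTrick(x * ( 0.5 / kLn2) + b);  // ~ exp(+x/2)
    double tn = shiftTrick(x * (-0.5 / kLn2) + b);  // ~ exp(-x/2)
    return tp / tn;
}
```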
Even greater accuracy can be achieved by combining methods from the fastExp1(x) or fastExp2(x) algorithms with fastExp3(x), using the equality exp(x + dx) = exp(x) * exp(dx). As shown above, the first factor can be computed with an approach similar to fastExp3(x), while the fastExp1(x) or fastExp2(x) method can be used for the second factor. Finding the optimal solution in this case is a difficult task, and I recommend looking at the implementations in the libraries suggested in the answers.
Answer 2 (score: 2)
There is no SSE2 implementation of exp, so if you don't want to roll your own as suggested above, one option is to use the AVX-512 ERI (Exponential and Reciprocal Instructions) on hardware that supports them. See https://en.wikipedia.org/wiki/AVX-512#New_instructions_in_AVX-512_exponential_and_reciprocal

I believe that currently means Xeon Phi only (as Peter Cordes pointed out; I did find claims of it appearing on Skylake and Cannonlake, but couldn't confirm them), and bear in mind that such code would simply not work (i.e. crash) on other architectures.