I'm trying to convert a loop I have into SSE intrinsics. I seem to have made fairly good progress, and by that I mean it's headed in the right direction, but I appear to have gotten some of the translation wrong somewhere, because I'm not getting the same "correct" answer the non-SSE code produces.
My original loop, unrolled by a factor of 4, looks like this:
int unroll_n = (N/4)*4;
for (int j = 0; j < unroll_n; j++) {
    for (int i = 0; i < unroll_n; i+=4) {
        float rx = x[j] - x[i];
        float ry = y[j] - y[i];
        float rz = z[j] - z[i];
        float r2 = rx*rx + ry*ry + rz*rz + eps;
        float r2inv = 1.0f / sqrt(r2);
        float r6inv = r2inv * r2inv * r2inv;
        float s = m[j] * r6inv;
        ax[i] += s * rx;
        ay[i] += s * ry;
        az[i] += s * rz;
        //unroll i 2
        rx = x[j] - x[i+1];
        ry = y[j] - y[i+1];
        rz = z[j] - z[i+1];
        r2 = rx*rx + ry*ry + rz*rz + eps;
        r2inv = 1.0f / sqrt(r2);
        r6inv = r2inv * r2inv * r2inv;
        s = m[j] * r6inv;
        ax[i+1] += s * rx;
        ay[i+1] += s * ry;
        az[i+1] += s * rz;
        //unroll i 3
        rx = x[j] - x[i+2];
        ry = y[j] - y[i+2];
        rz = z[j] - z[i+2];
        r2 = rx*rx + ry*ry + rz*rz + eps;
        r2inv = 1.0f / sqrt(r2);
        r6inv = r2inv * r2inv * r2inv;
        s = m[j] * r6inv;
        ax[i+2] += s * rx;
        ay[i+2] += s * ry;
        az[i+2] += s * rz;
        //unroll i 4
        rx = x[j] - x[i+3];
        ry = y[j] - y[i+3];
        rz = z[j] - z[i+3];
        r2 = rx*rx + ry*ry + rz*rz + eps;
        r2inv = 1.0f / sqrt(r2);
        r6inv = r2inv * r2inv * r2inv;
        s = m[j] * r6inv;
        ax[i+3] += s * rx;
        ay[i+3] += s * ry;
        az[i+3] += s * rz;
    }
}
I then basically went through the code above line by line and converted it into SSE intrinsics. The result is below. I'm not entirely sure whether the first three lines are needed, but I do know my data needs to be 16-byte aligned for this to work correctly and optimally.
float *x = malloc(sizeof(float) * N);
float *y = malloc(sizeof(float) * N);
float *z = malloc(sizeof(float) * N);

for (int j = 0; j < N; j++) {
    for (int i = 0; i < N; i+=4) {
        __m128 xj_v = _mm_set1_ps(x[j]);
        __m128 xi_v = _mm_load_ps(&x[i]);
        __m128 rx_v = _mm_sub_ps(xj_v, xi_v);
        __m128 yj_v = _mm_set1_ps(y[j]);
        __m128 yi_v = _mm_load_ps(&y[i]);
        __m128 ry_v = _mm_sub_ps(yj_v, yi_v);
        __m128 zj_v = _mm_set1_ps(z[j]);
        __m128 zi_v = _mm_load_ps(&z[i]);
        __m128 rz_v = _mm_sub_ps(zj_v, zi_v);
        __m128 r2_v = _mm_mul_ps(rx_v, rx_v) + _mm_mul_ps(ry_v, ry_v) + _mm_mul_ps(rz_v, rz_v) + _mm_set1_ps(eps);
        __m128 r2inv_v = _mm_div_ps(_mm_set1_ps(1.0f), _mm_sqrt_ps(r2_v));
        __m128 r6inv_1v = _mm_mul_ps(r2inv_v, r2inv_v);
        __m128 r6inv_v = _mm_mul_ps(r6inv_1v, r2inv_v);
        __m128 mj_v = _mm_set1_ps(m[j]);
        __m128 s_v = _mm_mul_ps(mj_v, r6inv_v);
        __m128 axi_v = _mm_load_ps(&ax[i]);
        __m128 ayi_v = _mm_load_ps(&ay[i]);
        __m128 azi_v = _mm_load_ps(&az[i]);
        __m128 srx_v = _mm_mul_ps(s_v, rx_v);
        __m128 sry_v = _mm_mul_ps(s_v, ry_v);
        __m128 srz_v = _mm_mul_ps(s_v, rz_v);
        axi_v = _mm_add_ps(axi_v, srx_v);
        ayi_v = _mm_add_ps(ayi_v, srx_v);
        azi_v = _mm_add_ps(azi_v, srx_v);
        _mm_store_ps(ax, axi_v);
        _mm_store_ps(ay, ayi_v);
        _mm_store_ps(az, azi_v);
    }
}
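(On the alignment point: plain malloc isn't guaranteed to hand back 16-byte-aligned memory on every platform, so if I keep using _mm_load_ps / _mm_store_ps I could switch to an aligned allocator instead. A minimal sketch of what I have in mind, assuming _mm_malloc / _mm_free from the intrinsics headers are acceptable here; I haven't done this yet:

#include <immintrin.h>   // SSE intrinsics; also pulls in _mm_malloc / _mm_free

// 16-byte-aligned allocations, safe for _mm_load_ps / _mm_store_ps
float *x = _mm_malloc(sizeof(float) * N, 16);
float *y = _mm_malloc(sizeof(float) * N, 16);
float *z = _mm_malloc(sizeof(float) * N, 16);
/* ... use the arrays ... */
_mm_free(x);
_mm_free(y);
_mm_free(z);
)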
I feel like the main idea is right, but since the resulting answer is incorrect there must be an error (or several) somewhere.
Thanks in advance.
Answer (score: 2)
I think your only bugs are simple typos, not logic errors; see below.
Can't you just use clang's auto-vectorization? Or do you need to use gcc for this code? Auto-vectorization would let you make SSE, AVX, and (in the future) AVX512 versions from the same source with no modifications. Intrinsics, unfortunately, don't scale to different vector sizes.
Based on your start at vectorizing, I made an optimized version. You should try it out; I'm curious to know whether it's faster than your version with the bugs fixed, or than clang's auto-vectorized version. :)
_mm_store_ps(ax, axi_v);
_mm_store_ps(ay, ayi_v);
_mm_store_ps(az, azi_v);
You load from ax[i], but then store to ax[0].
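In other words, the stores presumably need to index with i to match the loads:

_mm_store_ps(&ax[i], axi_v);
_mm_store_ps(&ay[i], ayi_v);
_mm_store_ps(&az[i], azi_v);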
Also, clang's unused-variable warning found this bug:
axi_v = _mm_add_ps(axi_v, srx_v);
ayi_v = _mm_add_ps(ayi_v, srx_v); // should be sry_v
azi_v = _mm_add_ps(azi_v, srx_v); // should be srz_v
Like I said in my answer to your previous question, you should interchange the loops, so the same ax[i+0..3], ay[i+0..3], and az[i+0..3] get reused, avoiding the load/store.
Also, if you aren't going to use rsqrtps + Newton-Raphson, you should use the transformation I pointed out in my last answer: divide m[j] by sqrt(k2)^3. There's no point dividing 1.0f by something using a divps and then multiplying by it once later.
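Spelled out in your variable names: s = m[j] * (1/sqrt(r2))^3 = m[j] / (r2)^(3/2) = m[j] / (sqrt(r2) * r2), so one sqrt, one multiply, and one divide replace the sqrt, divide, and three multiplies in your version.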
rsqrt may actually not be a win, because total uop throughput might be more of a bottleneck than div/sqrt throughput or latency. Three multiplies + FMA + rsqrtps is significantly more uops than sqrtps + divps. rsqrt is more helpful with AVX 256b vectors, because the divide/sqrt unit isn't full-width on SnB through Broadwell. Skylake has 12c latency sqrtps ymm, same as for xmm, but throughput is still better for xmm (one per 3c instead of one per 6c).
clang and gcc were both using rsqrtps / rsqrtss when compiling your code with -ffast-math. (Only using the packed version, of course.)
If you don't interchange the loops, you should manually hoist everything that only depends on j out of the inner loop. Compilers are generally good at this, but it still seems like a good idea to make the source reflect what you expect the compiler to be able to do. It helps with "seeing" what the loop is actually doing.
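For your original loop order, that hoisting might look roughly like this (a sketch reusing your variable names; the rest of the body is unchanged):

for (int j = 0; j < N; j++) {
    // everything that depends only on j, loaded/broadcast once per outer iteration
    __m128 xj_v = _mm_set1_ps(x[j]);
    __m128 yj_v = _mm_set1_ps(y[j]);
    __m128 zj_v = _mm_set1_ps(z[j]);
    __m128 mj_v = _mm_set1_ps(m[j]);
    for (int i = 0; i < N; i += 4) {
        __m128 rx_v = _mm_sub_ps(xj_v, _mm_load_ps(&x[i]));
        __m128 ry_v = _mm_sub_ps(yj_v, _mm_load_ps(&y[i]));
        __m128 rz_v = _mm_sub_ps(zj_v, _mm_load_ps(&z[i]));
        // ... same body as before, using mj_v in place of _mm_set1_ps(m[j]) ...
    }
}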
Here's a version with some optimizations over your original:
To get gcc/clang to fuse the mul/adds into FMA, I used -ffp-contract=fast. That gets high-throughput FMA instructions without using -ffast-math. (There's a lot of parallelism with the three separate accumulators, so the increased latency of FMA compared to addps shouldn't hurt at all. I expect port0/1 throughput to be the limiting factor here.) I thought gcc did this automatically, but it seems it doesn't without -ffast-math.
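If you'd rather not rely on contraction at all, the accumulation at the bottom of the inner loop could also be written with explicit FMA intrinsics. A sketch, assuming FMA3 hardware and compiling with -mfma (not something the version below requires):

// each fmadd computes s*r + accumulator in a single instruction
axi_v = _mm_fmadd_ps(s_v, rx_v, axi_v);
ayi_v = _mm_fmadd_ps(s_v, ry_v, ayi_v);
azi_v = _mm_fmadd_ps(s_v, rz_v, azi_v);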
Note that v^(3/2) = sqrt(v)^3 = sqrt(v)*v. This has lower latency and fewer instructions.
Interchanged the loops, and used broadcast-loads in the inner loop to improve locality (cutting the bandwidth requirement by 4, or by 8 with AVX). Each iteration of the inner loop only reads 4B of new data from each of the four source arrays (x, y, z, and m), so it makes heavy use of each cache line while it's hot.
Using broadcast-loads in the inner loop also means we accumulate ax[i + 0..3] in parallel, avoiding the need for a horizontal sum, which takes extra code. (See a previous version of this answer with the loops interchanged but vector loads in the inner loop, with stride = 16B.)
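For reference, the horizontal sum that the per-element accumulators avoid would look roughly like this for one __m128 (a sketch using only SSE1/SSE2 shuffles; it is not needed in the version below):

// sums the four floats of v into one scalar
static inline float hsum_ps(__m128 v) {
    __m128 hi   = _mm_movehl_ps(v, v);                              // high pair moved down: [v2, v3, v2, v3]
    __m128 sums = _mm_add_ps(v, hi);                                // [v0+v2, v1+v3, ...]
    __m128 shuf = _mm_shuffle_ps(sums, sums, _MM_SHUFFLE(1,1,1,1)); // broadcast element 1
    return _mm_cvtss_f32(_mm_add_ss(sums, shuf));                   // v0+v1+v2+v3
}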
It compiles nicely for Haswell with gcc, using FMA. (Still only 128b vector size though, unlike clang's auto-vectorized 256b version.) The inner loop is only 20 instructions, and only 13 of those are FPU ALU instructions that need port 0/1 on Intel SnB-family. It makes decent code even with baseline SSE2: no FMA, and shufps is needed for the broadcast-loads, but those don't compete with add/mul for execution units.
#include <immintrin.h>

void ffunc_sse128(float *restrict ax, float *restrict ay, float *restrict az,
                  const float *x, const float *y, const float *z,
                  int N, float eps, const float *restrict m)
{
    for (int i = 0; i < N; i+=4) {
        __m128 xi_v = _mm_load_ps(&x[i]);
        __m128 yi_v = _mm_load_ps(&y[i]);
        __m128 zi_v = _mm_load_ps(&z[i]);
        // vector accumulators for ax[i + 0..3] etc.
        __m128 axi_v = _mm_setzero_ps();
        __m128 ayi_v = _mm_setzero_ps();
        __m128 azi_v = _mm_setzero_ps();

        // AVX broadcast-loads are as cheap as normal loads,
        // and data-reuse meant that stand-alone load instructions were used anyway,
        // so we're not even losing out on folding loads into other insns.
        // An inner-loop stride of only 4B is a huge win if memory / cache bandwidth is a bottleneck.
        // Even without AVX, the shufps instructions are cheap,
        // and don't compete with add/mul for execution units on Intel.
        for (int j = 0; j < N; j++) {
            __m128 xj_v = _mm_set1_ps(x[j]);
            __m128 rx_v = _mm_sub_ps(xj_v, xi_v);
            __m128 yj_v = _mm_set1_ps(y[j]);
            __m128 ry_v = _mm_sub_ps(yj_v, yi_v);
            __m128 zj_v = _mm_set1_ps(z[j]);
            __m128 rz_v = _mm_sub_ps(zj_v, zi_v);
            __m128 mj_v = _mm_set1_ps(m[j]);   // do the load early

            // sum of squared differences
            __m128 r2_v = _mm_set1_ps(eps) + rx_v*rx_v + ry_v*ry_v + rz_v*rz_v;   // GNU extension
            /* __m128 r2_v = _mm_add_ps(_mm_set1_ps(eps), _mm_mul_ps(rx_v, rx_v));
               r2_v = _mm_add_ps(r2_v, _mm_mul_ps(ry_v, ry_v));
               r2_v = _mm_add_ps(r2_v, _mm_mul_ps(rz_v, rz_v));
            */

            // rsqrt and a Newton-Raphson iteration might have lower latency,
            // but there are enough surrounding instructions and enough cross-iteration parallelism
            // that the single-uop sqrtps and divps instructions probably aren't a bottleneck
            __m128 r2sqrt = _mm_sqrt_ps(r2_v);
            __m128 r6sqrt = _mm_mul_ps(r2_v, r2sqrt);  // v^(3/2) = sqrt(v)^3 = sqrt(v)*v
            __m128 s_v = _mm_div_ps(mj_v, r6sqrt);

            __m128 srx_v = _mm_mul_ps(s_v, rx_v);
            __m128 sry_v = _mm_mul_ps(s_v, ry_v);
            __m128 srz_v = _mm_mul_ps(s_v, rz_v);

            axi_v = _mm_add_ps(axi_v, srx_v);
            ayi_v = _mm_add_ps(ayi_v, sry_v);
            azi_v = _mm_add_ps(azi_v, srz_v);
        }
        _mm_store_ps(&ax[i], axi_v);
        _mm_store_ps(&ay[i], ayi_v);
        _mm_store_ps(&az[i], azi_v);
    }
}
I also tried a version using rsqrtps, but IDK whether it would be faster. Note that with -ffast-math, gcc and clang will convert the division into rcpps + a Newton iteration. (For some reason they won't convert 1.0f/sqrtf(x) into rsqrt + Newton, even in a stand-alone function.) clang does a better job, using FMA for the iteration step.
#define USE_RSQRT
#ifndef USE_RSQRT
    // even with -mrecip=vec-sqrt after -ffast-math, this still does sqrt(v)*v, then rcpps
    __m128 r2sqrt = _mm_sqrt_ps(r2_v);
    __m128 r6sqrt = _mm_mul_ps(r2_v, r2sqrt);  // v^(3/2) = sqrt(v)^3 = sqrt(v)*v
    __m128 s_v = _mm_div_ps(mj_v, r6sqrt);
#else
    __m128 r2isqrt = rsqrt_float4_single(r2_v);
    // can't use the sqrt(v)*v trick, unless we either do a normal sqrt first and then rcpps,
    // or rsqrtps and rcpps. Maybe it's possible to do a Newton-Raphson iteration on that product
    // instead of refining them both separately?
    __m128 r6isqrt = r2isqrt * r2isqrt * r2isqrt;
    __m128 s_v = _mm_mul_ps(mj_v, r6isqrt);
#endif
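rsqrt_float4_single isn't defined in the snippet above; the usual shape of such a helper is an rsqrtps estimate followed by one Newton-Raphson refinement step. A sketch under that assumption (only the name appears in the snippet; the body here is an assumption, not the answer's code):

// approximate 1/sqrt(x) per element: rsqrtps estimate plus one Newton-Raphson step
// y1 = 0.5 * y0 * (3 - x * y0 * y0), which roughly doubles the precision of the estimate
static inline __m128 rsqrt_float4_single(__m128 x) {
    __m128 y0    = _mm_rsqrt_ps(x);                    // ~12-bit estimate
    __m128 muls  = _mm_mul_ps(_mm_mul_ps(x, y0), y0);  // x * y0 * y0
    __m128 half  = _mm_set1_ps(0.5f);
    __m128 three = _mm_set1_ps(3.0f);
    return _mm_mul_ps(_mm_mul_ps(half, y0), _mm_sub_ps(three, muls));
}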