Why so many L1 data cache misses for blocked matrix multiplication on ARM?

Time: 2018-10-07 10:02:33

Tags: c caching arm matrix-multiplication micro-architecture

I am trying to optimize an integer matrix multiplication by dividing the matrices into smaller blocks, in order to get a better cache hit rate on a Raspberry Pi 3B+ (a Cortex-A53 core with 64-byte cache lines and a 4-way set-associative L1 data cache).

The code is as follows:

#define L1_D_CACHE_SZ (32 * 1024)
size_t cache_tune_g = 32;

void mat_mul(int *A, int *B, int *C, size_t M, size_t N, size_t strideA, size_t strideB, size_t strideC) {

  for(int i = 0; i < M; i++) {
    int *Ai = A + (N + strideA) * i;
    for(int j = 0; j < M; j++) {
        int sum = 0;
        int *Bj = B + j;

        for (int k = 0; k < N; k++) {
            int *Aik = Ai + k;
            int *Bjk = Bj + (M + strideB) * k;
            sum += (*Aik) * (*Bjk);
        }

        int *Cij = C + (M + strideC) * i + j;
        *Cij = (*Cij) + sum;
    }
  }
}

// If B 'fits' into the L1 data cache, do the multiplication directly;
// otherwise divide A and B into 4 sub-matrices each and recurse.
void mat_mul_opt(int *A, int *B, int *C, size_t M, size_t N, size_t strideA, size_t strideB, size_t strideC) {
  int B_size = sizeof(int) * M * N;
  if (B_size < L1_D_CACHE_SZ/cache_tune_g) {
    mat_mul(A, B, C, M, N, strideA, strideB, strideC);
  } else {
    size_t M_sub = M / 2;
    size_t N_sub = N / 2;
    size_t strideA_sub = N_sub + strideA;
    size_t strideB_sub = M_sub + strideB;
    size_t strideC_sub = M_sub + strideC;

    int *A1 = A;
    int *A2 = A + N_sub;
    int *A3 = A + (N + strideA) * M_sub;
    int *A4 = A3 + N_sub;

    int *B1 = B;
    int *B2 = B + M_sub;
    int *B3 = B + (M + strideB) * N_sub;
    int *B4 = B3 + M_sub;

    int *C1 = C;
    int *C2 = C + M_sub;
    int *C3 = C + (M + strideC) * M_sub;
    int *C4 = C3 + M_sub;

    // Because the result in C is accumulated, the order here matters.
    mat_mul_opt(A1, B1, C1, M_sub, N_sub, strideA_sub, strideB_sub, strideC_sub);
    mat_mul_opt(A2, B3, C1, M_sub, N_sub, strideA_sub, strideB_sub, strideC_sub);

    mat_mul_opt(A1, B2, C2, M_sub, N_sub, strideA_sub, strideB_sub, strideC_sub);
    mat_mul_opt(A2, B4, C2, M_sub, N_sub, strideA_sub, strideB_sub, strideC_sub);

    mat_mul_opt(A3, B1, C3, M_sub, N_sub, strideA_sub, strideB_sub, strideC_sub);
    mat_mul_opt(A4, B3, C3, M_sub, N_sub, strideA_sub, strideB_sub, strideC_sub);

    mat_mul_opt(A3, B2, C4, M_sub, N_sub, strideA_sub, strideB_sub, strideC_sub);
    mat_mul_opt(A4, B4, C4, M_sub, N_sub, strideA_sub, strideB_sub, strideC_sub);
  }
}

Here are the perf results:

 1,244,238,488      cache-references:u                                            (87.41%)
   193,808,545      cache-misses:u            #   15.576 % of all cache refs      (87.42%)
   192,979,016      L1-dcache-load-misses:u                                       (75.14%)
 6,651,396,875      cycles:u                                                      (87.59%)
 3,499,761,427      instructions:u            #    0.53  insn per cycle           (87.62%)
   539,801,098      branches:u                                                    (87.62%)                                            
     1,632,374      armv7_cortex_a7/l2d_cache_refill/:u                                     (87.48%)

   4.847838433 seconds time elapsed

In the test I set A to 1024x512 and B to 512x1024. That results in 262,144 calls to mat_mul, and in those final calls M x N is 16x8.

My own estimate of the cache misses is much lower than the perf result. Here it is:

Since matrix A is 16x8 and B is 8x16, each row of B (16 * sizeof(int) = 64 bytes) fits into exactly one L1 cache line. Moreover, A and B together should now fit into the L1 cache (16 * 8 * 2 * sizeof(int) = 1024 bytes; I assume a 32 KB L1D cache, and even taking the 4-way associativity into account, 1024 bytes should fit). So the computation in mat_mul with A (16x8) and B (8x16) should incur 16 + 8 = 24 L1 cache misses, which gives 262,144 * 24 = 6,291,456 cache misses over the whole computation.

But perf reports 192,979,016 cache misses, about 30 times what I expected.

So my question is: what is wrong with my calculation here? Or where do the extra cache misses come from?

I also used perf to record where the L1 D-cache misses come from, and the result is shown below. 99% of the misses are in mat_mul, and within mat_mul more than 80% come from the innermost loop line: sum += (*Aik) * (*Bjk);

  1.21 │ 9c:┌─→ldr    r0, [r3], #4                                                                                                                                           
  2.84 │    │  ldr    ip, [r1], fp                                                                                                                                           
       │    │  cmp    lr, r3                                                                                                                                                 
 80.42 │    │  mla    r2, ip, r0, r2                                                                                                                                         
       │    └──bne    9c               

Thanks!

0 Answers:

There are no answers yet.