I have written C code using an array to study the behavior of the caches on an Intel i7-8750H, where L1d = 32 KB, L2 = 256 KB, L3 = 9 MB, the line size is 64 bytes, and the set size (associativity) is 8. I am trying to understand the output I get from my code. If LRU is the cache's replacement policy, what else can I do in my code to make sure I get the fewest cache misses?
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define BILLION 1000000000L

/* Each element occupies exactly one 64-byte cache line. */
struct student
{
    char name[64];
};

int main(int argc, char *argv[])
{
    int m;
    char *n;
    char mn[64];
    uint64_t diff;
    struct timespec start, end;

    /* Number of array elements, taken from the command line. */
    m = strtol(argv[1], &n, 0);
    struct student *arr_student = malloc(m * sizeof(struct student));

    /* Touch every element once so the pages are populated before timing. */
    for (uint64_t i = 0; i < m; i++)
    {
        strcpy(arr_student[i].name, "abc");
    }

    /* 100 runs to ensure cache warm-up and linear access time calculation. */
    for (int j = 0; j < 100; j++)
    {
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
        /* Outer stride of 8; the inner block then copies the next 7 names,
           so every element is read once per pass (assumes m >= 8). */
        for (uint64_t i = 0; i < m; i += 8)
        {
            strcpy(mn, arr_student[i].name);
            if (i < (m - 8))
            {
                strcpy(mn, arr_student[i + 1].name);
                strcpy(mn, arr_student[i + 2].name);
                strcpy(mn, arr_student[i + 3].name);
                strcpy(mn, arr_student[i + 4].name);
                strcpy(mn, arr_student[i + 5].name);
                strcpy(mn, arr_student[i + 6].name);
                strcpy(mn, arr_student[i + 7].name);
            }
        }
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
    }

    /* start and end are overwritten on every pass, so diff covers only the last pass. */
    diff = BILLION * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
    printf("Time taken for linear read operation only: %llu nanoseconds\n",
           (long long unsigned int) diff / 8);

    free(arr_student);
    return 0;
}
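One side note on the timing itself: start and end are overwritten on every iteration of the outer j loop, so diff ends up measuring only the 100th pass. A minimal sketch of an accumulating variant, assuming the same struct layout as above (the names NPASSES and total_ns are illustrative, not from the original code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <time.h>

#define BILLION 1000000000L
#define NPASSES 100                      /* same number of passes as above */

struct student { char name[64]; };

int main(int argc, char *argv[])
{
    int m = strtol(argv[1], NULL, 0);
    char mn[64];
    struct student *arr = malloc(m * sizeof(struct student));
    for (int i = 0; i < m; i++)
        strcpy(arr[i].name, "abc");

    uint64_t total_ns = 0;
    struct timespec start, end;
    for (int j = 0; j < NPASSES; j++)
    {
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
        for (int i = 0; i < m; i++)      /* one simple linear pass */
            strcpy(mn, arr[i].name);
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
        total_ns += BILLION * (end.tv_sec - start.tv_sec)
                  + (end.tv_nsec - start.tv_nsec);
    }

    /* Average nanoseconds per pass over all passes, not just the last one. */
    printf("avg per pass: %llu ns\n", (unsigned long long)(total_ns / NPASSES));

    free(arr);
    return 0;
}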
The trend I see is that as the array size grows, the stride-8 loop takes longer and longer to execute. I expected it to stay roughly constant and to increase only when the CPU has to look in L2, i.e. once the array grows beyond what L1 can hold. I expected to see a result like this: https://www.google.com/search?q=cache+performance+trend+l1+l2&rlz=1C1GCEA_enUS831US831&source=lnms&tbm=isch&sa=X&ved=0ahUKEwi9jqqApYrgAhXYFjQIHR39BtwQ_AUIDygC&biw=1280&bih=913#imgrc=5JVNAazx3drZvM:
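For orientation, assuming the cache sizes quoted above and that each struct student fills exactly one 64-byte line, the array sizes at which each level should overflow can be worked out directly; this is a back-of-the-envelope sketch, not a measurement, and the sizes are nominal figures for this CPU:

#include <stdio.h>

/* Nominal cache sizes assumed here; adjust if your part differs. */
#define L1D_BYTES  (32 * 1024)
#define L2_BYTES   (256 * 1024)
#define L3_BYTES   (9 * 1024 * 1024)
#define LINE_BYTES 64                  /* sizeof(struct student) */

int main(void)
{
    /* One element per line, so capacity in elements = cache size / 64. */
    printf("elements fitting in L1d: %d\n", L1D_BYTES / LINE_BYTES);  /* 512    */
    printf("elements fitting in L2:  %d\n", L2_BYTES / LINE_BYTES);   /* 4096   */
    printf("elements fitting in L3:  %d\n", L3_BYTES / LINE_BYTES);   /* 147456 */
    return 0;
}

Under the simple step-function picture, the per-element time would be expected to stay roughly flat while m is below about 512, and to rise near m ≈ 4096 and again near m ≈ 147456, ignoring prefetching and sharing of the L3.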
Why does dividing diff by m give the inverse trend? I do not understand this trend.
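If the goal is a per-element figure, one hedged sketch of the normalization (my own illustration, assuming roughly all m elements are read once in the timed pass, since the inner block also copies elements i+1 through i+7):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper, not part of the original code: given the elapsed time
   of one pass and the element count m, print both normalizations. */
static void report(uint64_t diff_ns, uint64_t m)
{
    /* diff / 8 just rescales the per-pass total by a constant, so it still
       grows with m; diff / m spreads the same total over the elements read. */
    printf("diff / 8 = %llu ns\n", (unsigned long long)(diff_ns / 8));
    printf("diff / m = %llu ns per element\n", (unsigned long long)(diff_ns / m));
}

int main(void)
{
    report(1000000, 4096);   /* example numbers only */
    return 0;
}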
Please help?
Answer 0 (score: 0)
Here are some useful tips on memory alignment and code optimization:
In general, code optimization is a matter of time and experience.
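As one concrete illustration of the memory-alignment point (a sketch of my own, not something stated in this answer): C11's aligned_alloc can place the array on a cache-line boundary, so each 64-byte struct student maps to exactly one line:

#include <stdlib.h>
#include <string.h>

struct student { char name[64]; };

int main(void)
{
    size_t m = 4096;                       /* example element count */
    /* aligned_alloc requires the size to be a multiple of the alignment;
       m * 64 always is, since sizeof(struct student) == 64. */
    struct student *arr = aligned_alloc(64, m * sizeof(struct student));
    if (arr == NULL)
        return 1;

    for (size_t i = 0; i < m; i++)
        strcpy(arr[i].name, "abc");

    free(arr);
    return 0;
}

With plain malloc the base of the array is only guaranteed to be aligned for the largest fundamental type, so the first struct could straddle two cache lines; pinning the base to 64 bytes removes that possibility.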