I don't understand why test1 and test2 differ so much in CPU usage: 'top' reports about 90% CPU while test1 runs, but 100% while test2 runs. test1's buffer is only 1 KB, so it should fit entirely in my L1 cache and shouldn't be suffering from cache misses. I've attached the cachegrind report after each source listing, and the test environment at the end (plus a small timing sketch at the very bottom to double-check what top reports).
I'm really confused; any input is welcome.
----- Test 1 -----------
#include <stdint.h>

static const int BUFFER_LENGTH = 1024;

int main(int argc, char* argv[])
{
    char buff[BUFFER_LENGTH];
    for (uint64_t i = 0; i < 100000000; ++i)
        buff[(i * 1) % BUFFER_LENGTH] = '1';  /* 1 KB buffer, one byte written per iteration */
    return 1;
}
==4813== I refs: 601,353,862
==4813== I1 misses: 1,032
==4813== LLi misses: 1,007
==4813== I1 miss rate: 0.00%
==4813== LLi miss rate: 0.00%
==4813==
==4813== D refs: 400,455,090 (300,341,408 rd + 100,113,682 wr)
==4813== D1 misses: 8,184 ( 6,978 rd + 1,206 wr)
==4813== LLd misses: 4,970 ( 4,047 rd + 923 wr)
==4813== D1 miss rate: 0.0% ( 0.0% + 0.0% )
==4813== LLd miss rate: 0.0% ( 0.0% + 0.0% )
==4813==
==4813== LL refs: 9,216 ( 8,010 rd + 1,206 wr)
==4813== LL misses: 5,977 ( 5,054 rd + 923 wr)
==4813== LL miss rate: 0.0% ( 0.0% + 0.0% )
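To double-check that the 1 KB buffer really takes cache misses out of the picture, I'm also planning to try a variant of test1 with a buffer far bigger than any of my caches. This is only a sketch I haven't run yet; the 64 MB size and the 64-byte stride are my own picks (the stride is there to touch a new cache line on every write), and the buffer goes on the heap so it doesn't blow the stack. If cache behaviour mattered, cachegrind's D1/LLd miss rates should shoot up for this version.

#include <stdint.h>
#include <stdlib.h>

enum { BIG_LENGTH = 64 * 1024 * 1024 };  /* well past L1/L2/L3 on this box */

int main(int argc, char* argv[])
{
    char* buff = malloc(BIG_LENGTH);
    if (!buff)
        return 1;
    for (uint64_t i = 0; i < 100000000; ++i)
        buff[(i * 64) % BIG_LENGTH] = '1';  /* jump one cache line per write */
    free(buff);
    return 0;
}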
----- Test 2 -----------
#include <stdint.h>

int main(int argc, char* argv[])
{
    for (uint64_t i = 0, a = 0; i < 100000000; ++i)
        a++;  /* a is never read afterwards */
    return 1;
}
==22081== I refs: 401,352,490
==22081== I1 misses: 1,010
==22081== LLi misses: 989
==22081== I1 miss rate: 0.00%
==22081== LLi miss rate: 0.00%
==22081==
==22081== D refs: 300,454,488 (300,340,997 rd + 113,491 wr)
==22081== D1 misses: 8,162 ( 6,966 rd + 1,196 wr)
==22081== LLd misses: 4,965 ( 4,043 rd + 922 wr)
==22081== D1 miss rate: 0.0% ( 0.0% + 1.0% )
==22081== LLd miss rate: 0.0% ( 0.0% + 0.8% )
==22081==
==22081== LL refs: 9,172 ( 7,976 rd + 1,196 wr)
==22081== LL misses: 5,954 ( 5,032 rd + 922 wr)
==22081== LL miss rate: 0.0% ( 0.0% + 0.8% )
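One thing I'm not sure about: since a is never read, the compiler is in principle free to simplify test2's loop, which would make the comparison with test1 unfair. A variant I'm considering (just a guess on my part, not something I've verified matters here) marks the counter volatile so every increment has to go to memory, the same way test1's stores do:

#include <stdint.h>

int main(int argc, char* argv[])
{
    volatile uint64_t a = 0;  /* volatile: the increment can't be optimised away */
    for (uint64_t i = 0; i < 100000000; ++i)
        a++;
    return 1;
}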
Test environment:
CPU: Intel(R) Xeon(R) CPU E5606 @ 2.13GHz (4 cores * 2)
Memory: 60G
OS: Linux version 2.6.18-164.el5 (gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)) #1 SMP Tue Aug 18 15:51:48 EDT 2009
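Finally, here is the timing sketch I mentioned above. It wraps test1's loop with clock_gettime and compares process CPU time against wall-clock time, so I can see whether the process is actually being descheduled while top shows 90%. I haven't run it on this box yet; an older glibc may need -lrt for clock_gettime, and the buffer is marked volatile only so the loop can't be optimised away.

#include <stdint.h>
#include <stdio.h>
#include <time.h>

static const int BUFFER_LENGTH = 1024;

static double diff_sec(struct timespec a, struct timespec b)
{
    return (double)(b.tv_sec - a.tv_sec) + (double)(b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(int argc, char* argv[])
{
    volatile char buff[BUFFER_LENGTH];   /* volatile: keep the stores in the loop */
    struct timespec cpu0, cpu1, wall0, wall1;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu0);
    clock_gettime(CLOCK_MONOTONIC, &wall0);

    for (uint64_t i = 0; i < 100000000; ++i)
        buff[(i * 1) % BUFFER_LENGTH] = '1';

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu1);
    clock_gettime(CLOCK_MONOTONIC, &wall1);

    printf("cpu  time: %.3f s\n", diff_sec(cpu0, cpu1));
    printf("wall time: %.3f s\n", diff_sec(wall0, wall1));
    return 0;
}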