I am trying to compare the aggregate write rate when writing to files in a GPFS file system versus writing directly to disk on a GNU/Linux system. For my application I need to measure the raw rate, i.e. without taking advantage of the cache. I don't understand the effect the direct option used with dd has on bypassing the cache: when writing directly to a block device with oflag=direct, my rate is dramatically lower than when writing to a file in the GPFS file system. Why is that?
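For context, here is a minimal single-process sketch of the two dd variants involved (the device name /dev/sdX is a placeholder for a scratch block device, not my actual setup):

#conv=fsync writes through the page cache and forces a flush only once, after the last block,
#so the reported rate largely reflects cached writes
dd if=/dev/zero of=/dev/sdX bs=256k count=4096 conv=fsync
#oflag=direct opens the target with O_DIRECT, so each 256k write bypasses the page cache
#and goes to the device before the next one is issued
dd if=/dev/zero of=/dev/sdX bs=256k count=4096 oflag=direct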
To measure the aggregate rate, I spawn p processes running dd that write simultaneously, either to the block device or to files. I then sum the p reported rates to get the total write rate (a summation sketch follows the script below).
Results: with 30 processes, each process gets the following write rate:
- writing to the block device with both conv=fsync and oflag=direct: ~9 MB/s
- writing to the block device with conv=fsync only: ~180 MB/s
- writing to a file in the GPFS file system with conv=fsync and oflag=direct: ~80 MB/s
numprocs=30
directdiskrate=~/scratch/rate5
syncdiskrate=~/scratch/rate4
filerate=~/scratch/rate3
#writing to the block device with both conv=fsync and oflag=direct, each process gets a write rate of ~9MB/s
writetodiskdirect="dd if=/dev/zero of=/dev/sdac bs=256k count=4096 conv=fsync oflag=direct iflag=fullblock"
for p in $(seq $numprocs)
do
#parse the dd output: the rate is the third comma-separated field on the last line
$writetodiskdirect 2>&1|tail -n 1|awk 'BEGIN { FS = "," } ; { print $3 }'|sed -e 's/MB\/s//g'>>$directdiskrate&
done
wait
#writing to the block device with only conv=fsync, each process gets a write rate of ~180MB/s
writetodisksync="dd if=/dev/zero of=/dev/sdac bs=256k count=4096 conv=fsync iflag=fullblock"
for p in $(seq $numprocs)
do
#parse the dd output: the rate is the third comma-separated field on the last line
$writetodisksync 2>&1|tail -n 1|awk 'BEGIN { FS = "," } ; { print $3 }'|sed -e 's/MB\/s//g'>>$syncdiskrate&
done
wait
#writing to a file in the GPFS file system with both conv=fsync and oflag=direct, each process gets a write rate of ~80MB/s
for p in $(seq $numprocs)
do
writetofile="dd if=/dev/zero of=/gpfs1/fileset6/file$p bs=256k count=4096 conv=fsync oflag=direct"
#parse the dd output: the rate is the third comma-separated field on the last line
$writetofile 2>&1|tail -n 1|awk 'BEGIN { FS = "," } ; { print $3 }'|sed -e 's/MB\/s//g'>>$filerate&
done
wait
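The per-process rates collected in the rate files are then summed; one way to do this (a minimal sketch, assuming each rate file holds one MB/s value per line, as written by the parsing above):

for f in $directdiskrate $syncdiskrate $filerate
do
#add up the MB/s values, one per dd process, recorded in this rate file
awk '{ total += $1 } END { print FILENAME ": " total " MB/s aggregate" }' "$f"
done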
Thanks, Amy