Sequential and random disk writes on Linux
1. Preface
● Random writes force the disk head to seek between tracks constantly, which drastically reduces efficiency; with sequential writes the head barely needs to seek at all, or the seek time is very short.
● This article looks at the concrete differences between the two and at the corresponding kernel calls.
2. Environment
| Component | Version |
|---|---|
| OS | Ubuntu 16.04.4 LTS |
| fio | 2.2.10 |
3. About fio
An fio run reflects how the system behaves under reads and writes. In fio's output report we need to focus on a few key metrics:
slat: the time from I/O submission until the I/O is actually issued (submission latency)
clat: the time from I/O submission until the I/O completes (completion latency)
lat: the total time from fio creating the I/O until the I/O completes
bw: throughput (bandwidth)
iops: number of I/O operations per second
4. Synchronous write tests
(1) Synchronous random write
fio is the main test tool here; to see the system calls it issues, we wrap it with strace. The command looks like this.
First, a random write:
```
strace -f -tt -o /tmp/randwrite.log -D fio -name=randwrite -rw=randwrite \
  -direct=1 -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/tmp/test.db
```
Running it and extracting the key information:
```
root@wilson-ubuntu:~# strace -f -tt -o /tmp/randwrite.log -D fio -name=randwrite -rw=randwrite \
> -direct=1 -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/tmp/test.db
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.2.10
Starting 1 process
...
randwrite: (groupid=0, jobs=1): err= 0: pid=26882: Wed Aug 14 10:39:02 2019
  write: io=1024.0MB, bw=52526KB/s, iops=13131, runt=19963msec
    clat (usec): min=42, max=18620, avg=56.15, stdev=164.79
     lat (usec): min=42, max=18620, avg=56.39, stdev=164.79
    ...
    bw (KB/s): min=50648, max=55208, per=99.96%, avg=52506.03, stdev=1055.83
    ...
Run status group 0 (all jobs):
  WRITE: io=1024.0MB, aggrb=52525KB/s, minb=52525KB/s, maxb=52525KB/s, mint=19963msec, maxt=19963msec
Disk stats (read/write):
  ...
  sda: ios=0/262177, merge=0/25, ticks=0/7500, in_queue=7476, util=36.05%
```
The figures we care about most:
(1) clat: average latency around 56 μs
(2) lat: average latency around 56 μs
(3) bw: throughput of roughly 52 MB/s
Now look at the system call trace:
```
root@wilson-ubuntu:~# more /tmp/randwrite.log
...
26882 10:38:41.919904 lseek(3, 665198592, SEEK_SET) = 665198592
26882 10:38:41.919920 write(3, "\220\240@\6\371\341\277>\0\200\36\31\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.919969 lseek(3, 4313088, SEEK_SET) = 4313088
26882 10:38:41.919985 write(3, "\220\240@\6\371\341\277>\0\200\36\31\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920032 lseek(3, 455880704, SEEK_SET) = 455880704
26882 10:38:41.920048 write(3, "\220\240@\6\371\341\277>\0\200\36\31\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920096 lseek(3, 338862080, SEEK_SET) = 338862080
26882 10:38:41.920112 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920161 lseek(3, 739086336, SEEK_SET) = 739086336
26882 10:38:41.920177 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920229 lseek(3, 848175104, SEEK_SET) = 848175104
26882 10:38:41.920245 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920296 lseek(3, 1060147200, SEEK_SET) = 1060147200
26882 10:38:41.920312 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920362 lseek(3, 863690752, SEEK_SET) = 863690752
26882 10:38:41.920377 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920428 lseek(3, 279457792, SEEK_SET) = 279457792
26882 10:38:41.920444 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920492 lseek(3, 271794176, SEEK_SET) = 271794176
26882 10:38:41.920508 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
26882 10:38:41.920558 lseek(3, 1067864064, SEEK_SET) = 1067864064
26882 10:38:41.920573 write(3, "\220\240@\6\371\341\277>\0\2402\24\0\0\0\0\202\2\7\320\343\6H\26P\340\277\370\330\30e\30"..., 4096) = 4096
...
```
With random writes, every write() is preceded by an lseek() to position the file offset, as sketched below.
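To make that pattern concrete, here is a minimal C sketch of what the sync engine is effectively doing for each random 4 KB write: seek to a random block-aligned offset, then write one block. This is not fio's actual code; the file name, loop count, and flags just mirror the test setup above (direct=1 needs an aligned buffer and offsets, and /tmp/test.db is assumed to already exist at its 1 GB size from the fio runs).

```c
/* Minimal sketch of the lseek()+write() pattern seen in the strace output above.
 * Assumptions: 4 KB blocks, O_DIRECT, /tmp/test.db already sized to 1 GB. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t bs = 4096;
    const off_t file_size = 1024L * 1024 * 1024;    /* 1 GB */
    int fd = open("/tmp/test.db", O_WRONLY | O_DIRECT);
    if (fd < 0)
        return 1;

    void *buf;
    if (posix_memalign(&buf, bs, bs) != 0)          /* O_DIRECT requires an aligned buffer */
        return 1;

    for (int i = 0; i < 1000; i++) {
        /* Pick a random, block-aligned offset: this is what forces the
         * extra lseek() before every write() in the trace. */
        off_t offset = (rand() % (file_size / bs)) * bs;
        if (lseek(fd, offset, SEEK_SET) < 0)
            return 1;
        if (write(fd, buf, bs) != (ssize_t)bs)
            return 1;
    }

    free(buf);
    close(fd);
    return 0;
}
```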
(2) Synchronous sequential write
Use the same approach to test a sequential write:
```
root@wilson-ubuntu:~# strace -f -tt -o /tmp/write.log -D fio -name=write -rw=write \
> -direct=1 -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/tmp/test.db
write: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/70432KB/0KB /s] [0/17.7K/0 iops] [eta 00m:00s]
write: (groupid=0, jobs=1): err= 0: pid=27005: Wed Aug 14 10:53:02 2019
  write: io=1024.0MB, bw=70238KB/s, iops=17559, runt=14929msec
    clat (usec): min=43, max=7464, avg=55.95, stdev=56.24
     lat (usec): min=43, max=7465, avg=56.15, stdev=56.25
    ...
    bw (KB/s): min=67304, max=72008, per=99.98%, avg=70225.38, stdev=1266.88
    ...
Run status group 0 (all jobs):
  WRITE: io=1024.0MB, aggrb=70237KB/s, minb=70237KB/s, maxb=70237KB/s, mint=14929msec, maxt=14929msec
Disk stats (read/write):
  ...
  sda: ios=0/262162, merge=0/10, ticks=0/6948, in_queue=6932, util=46.49%
```
We can see that throughput rises to about 70 MB/s.
And the system call trace:
```
root@wilson-ubuntu:~# more /tmp/write.log
...
27046 10:54:28.194508 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\360\t\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194568 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194627 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194687 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194747 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194807 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194868 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194928 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.194988 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195049 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195110 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195197 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195262 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195330 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195426 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195497 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195567 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195637 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195704 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195757 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195807 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195859 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195910 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.195961 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196012 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196062 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\220\24\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196112 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196162 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196213 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196265 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196314 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196363 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196414 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196472 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196524 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
27046 10:54:28.196573 write(3, "\0\0\23\0\0\0\0\0\0\300\16\0\0\0\0\0\0\26\0\0\0\0\0\0\320\17\0\0\0\0\0"..., 4096) = 4096
...
```
Because the writes are sequential, there is no need to reposition the file offset before each one, so the process spends its time purely on write() calls; see the sketch below.
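For comparison, here is a minimal sketch of the sequential case under the same assumptions as the random sketch above: the kernel advances the file offset after every successful write(), so the loop needs no lseek() at all, which matches the trace showing nothing but write() calls.

```c
/* Sequential counterpart of the random sketch above: no lseek() is needed,
 * the file offset advances automatically after each successful write().
 * Same assumptions: 4 KB blocks, O_DIRECT, /tmp/test.db already exists. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t bs = 4096;
    int fd = open("/tmp/test.db", O_WRONLY | O_DIRECT);
    void *buf;
    if (fd < 0 || posix_memalign(&buf, bs, bs) != 0)
        return 1;

    for (int i = 0; i < 1000; i++)              /* write 1000 blocks back to back */
        if (write(fd, buf, bs) != (ssize_t)bs)  /* exactly one system call per block */
            return 1;

    free(buf);
    close(fd);
    return 0;
}
```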
5. The slat metric
In the tests above, slat never appears in fio's report. That is because they were all synchronous operations: for synchronous I/O, submission and completion are a single action, so slat is effectively just the time it takes the I/O to complete.
Asynchronous sequential write: take the synchronous sequential write command and add -ioengine=libaio:
```
root@wilson-ubuntu:~# fio -name=write -rw=write -ioengine=libaio -direct=1 -bs=4k -size=1G -numjobs=1 -group_reporting -filename=/tmp/test.db
write: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/119.3MB/0KB /s] [0/30.6K/0 iops] [eta 00m:00s]
write: (groupid=0, jobs=1): err= 0: pid=27258: Wed Aug 14 11:14:36 2019
  write: io=1024.0MB, bw=120443KB/s, iops=30110, runt=8706msec
    slat (usec): min=3, max=70, avg=4.31, stdev=1.56
    clat (usec): min=0, max=8967, avg=28.13, stdev=55.68
     lat (usec): min=22, max=8976, avg=32.53, stdev=55.72
    ...
    bw (KB/s): min=118480, max=122880, per=100.00%, avg=120467.29, stdev=1525.68
    ...
Run status group 0 (all jobs):
  WRITE: io=1024.0MB, aggrb=120442KB/s, minb=120442KB/s, maxb=120442KB/s, mint=8706msec, maxt=8706msec
Disk stats (read/write):
  ...
  sda: ios=0/262147, merge=0/1, ticks=0/6576, in_queue=6568, util=74.32%
```
We can see that the slat metric now appears and that lat is approximately slat + clat (comparing the avg values). After switching to asynchronous I/O, throughput improves dramatically, to about 120 MB/s.
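To show where slat and clat come from with the libaio engine, here is a minimal C sketch of the io_submit()/io_getevents() pattern that libaio-based engines are built on (link with -laio). It is not fio's implementation; the file name and loop count are placeholders, and O_DIRECT is kept so the submission stays genuinely asynchronous. Roughly speaking, the time spent in io_submit() corresponds to slat, and the wait in io_getevents() to clat.

```c
/* Minimal libaio sketch of one-deep asynchronous sequential writes.
 * Build with: gcc -o aiowrite aiowrite.c -laio */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t bs = 4096;
    int fd = open("/tmp/test.db", O_WRONLY | O_DIRECT);  /* O_DIRECT keeps the AIO truly async */
    void *buf;
    if (fd < 0 || posix_memalign(&buf, bs, bs) != 0)
        return 1;

    io_context_t ctx = 0;
    if (io_setup(1, &ctx) != 0)                     /* queue depth 1, like iodepth=1 */
        return 1;

    for (int i = 0; i < 1000; i++) {
        struct iocb cb;
        struct iocb *list[1] = { &cb };
        struct io_event ev;

        io_prep_pwrite(&cb, fd, buf, bs, (long long)i * bs);
        if (io_submit(ctx, 1, list) != 1)           /* submission: roughly what slat measures */
            return 1;
        if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) /* wait for completion: roughly clat */
            return 1;
    }

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
}
```

With iodepth=1, as in the test above, each I/O is still waited for before the next one is submitted; larger iodepth values let several such requests stay in flight at once, which is where asynchronous engines gain most of their throughput.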
6. Summary
● fio should be treated as a disk baseline tool: whenever you get a machine (physical or cloud), run a baseline test on its disks right away so you know what to expect from them.
● All of the tests in this article bypass the cache (direct=1); in real applications the effect of caching has to be taken into account.