IO Plumbing tests with FIO

Flexible IO Tester, aka FIO, is an open-source synthetic benchmark tool originally developed by Jens Axboe and now maintained by a community of developers.  FIO can generate many types of IO workloads, be it sequential reads or random writes, synchronous or asynchronous, based on the options provided by the user.  Through its global options, many different types of workloads can be generated, making FIO one of the easiest and most versatile tools for quickly running IO plumbing tests against a storage system.

FIO makes it easy to generate sequential or random IO workloads with a varying number of jobs, read/write mix, and block size to mimic real-world workloads.  FIO can also produce very detailed output; by default it reports the key metrics of IOPS, latency, and throughput.

Things to consider

To keep I/O from being served out of the host system cache, use the --direct option, which reads and writes directly to the disk.  To use Linux native asynchronous IO, set the ioengine option to libaio.
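
As a minimal sketch, these two settings can also be kept in a reusable fio job file (the file name and job name below are illustrative):

# direct-aio.fio -- illustrative job file
[global]
direct=1          # bypass the host page cache
ioengine=libaio   # Linux native asynchronous IO

[sample-job]
rw=randread
bs=8k
size=1G

Run it with: fio direct-aio.fio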

When FIO is invoked, it creates a file with the name given in --name, sized per --size, and issues I/O at the block size given in --bs.  If --numjobs is provided, it creates one file per job, named name.n.0, where n ranges from 0 to numjobs-1.
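
For example, assuming a job named test run with --numjobs=2 in the current directory, you would end up with two data files:

fio --name=test --rw=read --bs=8k --size=64m --numjobs=2
ls test.*
# expected: test.0.0  test.1.0  (one data file per job)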

--numjobs = The more jobs, the higher the performance can be (based on resource availability).  If your server is limited on resources (TCP or FC), run fio across multiple servers to push more workload to the storage subsystem, as sketched below.
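
fio has a built-in client/server mode for exactly this: start a listener on each load-generator host and drive them all from one coordinator (the host names and job file below are placeholders):

# on each load-generator host
fio --server

# on the coordinating host
fio --client=host1 randread.fio --client=host2 randread.fio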

--time_based = FIO will run for the full --runtime value, looping over the workload again if it completes early.
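
For example, this keeps a random-read load running for a full 60 seconds even though a 256MB file can be read through much sooner (sizes and runtime here are arbitrary):

fio --name=loopread --rw=randread --bs=4k --size=256m --time_based --runtime=60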

Software

At this time I am using fio on RHEL 7.0. You should be able to find the relevant rpm at this site.

FIO Cheat sheet

1. Sequential Reads – Async mode – 8K block size – Direct IO – 100% Reads

fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=8k --numjobs=8 --size=1G --runtime=600 --group_reporting

2. Sequential Writes – Async mode – 32K block size – Direct IO – 100% Writes

fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=32k --numjobs=4 --size=2G --runtime=600 --group_reporting

3. Random Reads – Async mode – 8K block size – Direct IO – 100% Reads

fio --name=randread --rw=randread --direct=1 --ioengine=libaio --bs=8k --numjobs=16 --size=1G --runtime=600 --group_reporting

4. Random Writes – Async mode – 64K block size – Direct IO – 100% Writes

fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=64k --numjobs=8 --size=512m --runtime=600 --group_reporting

5. Random Read/Writes – Async mode – 16K block size – Direct IO – 90% Reads/10% Writes

fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=90 --size=1G --runtime=600 --group_reporting
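
For repeatability, any of these one-liners can also be written as a fio job file. A sketch equivalent to example 5 (the file name randrw.fio is arbitrary):

# randrw.fio -- job-file form of example 5
[global]
direct=1
ioengine=libaio
bs=16k
size=1G
runtime=600
group_reporting

[randrw]
rw=randrw
rwmixread=90
numjobs=8

Run it with: fio randrw.fio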

Sample output

The following command creates 8 files (numjobs=8), each 512MB in size (size=512m), and performs random reads and writes (rw=randrw) at a 64K block size (bs=64k) with a mixed workload of 70% reads and 30% writes (rwmixread=70). The job runs for the full 5 minutes (runtime=300 combined with time_based) even if the files have already been fully created and read/written.

[root@orarac1 fio]# fio --name=randrw --ioengine=libaio --iodepth=1 --rw=randrw \
--bs=64k --direct=1 --size=512m --numjobs=8 --runtime=300 --group_reporting \
--time_based --rwmixread=70

randrw: (g=0): rw=randrw, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=1
 ...
 fio-2.1.10
 Starting 8 processes
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 randrw: Laying out IO file(s) (1 file(s) / 512MB)
 Jobs: 8 (f=8): [mmmmmmmm] [2.0% done] [252.0MB/121.3MB/0KB /s] [4032/1940/0 iops] [eta 04m:55s]

randrw: (groupid=0, jobs=8): err= 0: pid=31900: Mon Jun 13 01:01:08 2016
 read : io=78815MB, bw=269020KB/s, iops=4203, runt=300002msec
 slat (usec): min=6, max=173, avg= 9.99, stdev= 3.63
 clat (usec): min=430, max=23909, avg=1023.31, stdev=273.66
 lat (usec): min=447, max=23917, avg=1033.46, stdev=273.78
 clat percentiles (usec):
 | 1.00th=[ 684], 5.00th=[ 796], 10.00th=[ 836], 20.00th=[ 892],
 | 30.00th=[ 932], 40.00th=[ 964], 50.00th=[ 996], 60.00th=[ 1032],
 | 70.00th=[ 1080], 80.00th=[ 1128], 90.00th=[ 1208], 95.00th=[ 1288],
 | 99.00th=[ 1560], 99.50th=[ 2256], 99.90th=[ 3184], 99.95th=[ 3408],
 | 99.99th=[13888]
 bw (KB /s): min=28288, max=39217, per=12.49%, avg=33596.69, stdev=1709.09
 write: io=33899MB, bw=115709KB/s, iops=1807, runt=300002msec
 slat (usec): min=7, max=140, avg=11.42, stdev= 3.96
 clat (usec): min=1246, max=24744, avg=2004.11, stdev=333.23
 lat (usec): min=1256, max=24753, avg=2015.69, stdev=333.36
 clat percentiles (usec):
 | 1.00th=[ 1576], 5.00th=[ 1688], 10.00th=[ 1752], 20.00th=[ 1816],
 | 30.00th=[ 1880], 40.00th=[ 1928], 50.00th=[ 1976], 60.00th=[ 2040],
 | 70.00th=[ 2096], 80.00th=[ 2160], 90.00th=[ 2256], 95.00th=[ 2352],
 | 99.00th=[ 2576], 99.50th=[ 2736], 99.90th=[ 4256], 99.95th=[ 4832],
 | 99.99th=[16768]
 bw (KB /s): min=11776, max=16896, per=12.53%, avg=14499.30, stdev=907.78
 lat (usec) : 500=0.01%, 750=1.61%, 1000=33.71%
 lat (msec) : 2=50.35%, 4=14.27%, 10=0.04%, 20=0.02%, 50=0.01%
 cpu : usr=0.46%, sys=1.60%, ctx=1804510, majf=0, minf=196
 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 issued : total=r=1261042/w=542389/d=0, short=r=0/w=0/d=0
 latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
 READ: io=78815MB, aggrb=269020KB/s, minb=269020KB/s, maxb=269020KB/s, mint=300002msec, maxt=300002msec
 WRITE: io=33899MB, aggrb=115708KB/s, minb=115708KB/s, maxb=115708KB/s, mint=300002msec, maxt=300002msec
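
When the results need to be post-processed rather than read by eye, fio can emit the same report as JSON. A sketch of the same workload with machine-readable output (the jq filter is illustrative and assumes jq is installed):

fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=64k \
    --numjobs=8 --rwmixread=70 --size=512m --runtime=300 --time_based \
    --group_reporting --output-format=json --output=result.json
# pull the aggregate read IOPS out of the JSON report
jq '.jobs[0].read.iops' result.json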