Fio – Flexible I/O Tester (Job file)

A basic introduction to Fio was given earlier at http://benjr.tw/34632 , but can it, like Iometer, step the # of Outstanding IO through a test series, either exponentially (Exponential Stepping: 1, 2, 4, 8, 16...) or linearly (Linear Stepping: 1, 2, 4, 6, 8...)?

The answer is yes. We just prepare a job file (configuration file) in advance with the different parameter values we want to test.

When the setup gets complex, a job file is the easiest way to drive Fio. A job file really has just two kinds of sections: [global], for shared parameters, and one [job] section per job, for job-specific parameters.

To test different iodepth values (similar to Iometer's outstanding IO), we can define separate jobs, each with its own iodepth (1, 2, 4).

[root@localhost ~]# vi fio01.cfg
[global]
direct=1
refill_buffers=1
ramp_time=5
ioengine=libaio
time_based
runtime=30
filename=/dev/sdb
rw=read
rwmixread=100
bs=64k
[Job1]
iodepth=1
[Job2]
iodepth=2
[Job3]
iodepth=4
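The exponential stepping asked about at the start can be handled the same way: generate the job sections with a small shell loop instead of typing them by hand. A minimal sketch that emits one job per doubled iodepth (1, 2, 4, 8, 16):

```shell
#!/bin/sh
# Emit one [JobN] section per iodepth, doubling each step
# (exponential stepping: 1, 2, 4, 8, 16).
n=1
d=1
while [ "$d" -le 16 ]; do
    printf '[Job%d]\niodepth=%d\n' "$n" "$d"
    n=$((n + 1))
    d=$((d * 2))
done
```

Redirect the output with >> fio01.cfg to extend the job file above; for linear stepping, replace d=$((d * 2)) with d=$((d + 2)).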

Parameter reference:

  • direct=1
    Defaults to 0; it must be set to 1 to test real non-buffered I/O.
  • refill_buffers=1
    Refill the I/O buffers on every submit, so each I/O carries freshly generated data instead of reusing the same buffer contents, which keeps caching and compressing devices from flattering the result.
  • ramp_time=5
    Exclude the first period of the run (in seconds) from the overall statistics, so warm-up time spent in the cache does not skew the numbers.
  • ioengine=libaio
    Defines how the I/O is issued; libaio is Linux's native asynchronous I/O interface.
    Other engines include sync , psync , vsync , posixaio , mmap , splice , syslet-rw , sg , null , net , netsplice , cpuio , guasi , external
  • filename=/dev/sdb
    Specifies the disk to test. With multiple disks, list them separated by colons, e.g. filename=/dev/sdb:/dev/sdc
  • time_based
    Run for the full runtime even if the file or device has already been read or written in its entirety; the alternative is to bound the test by size instead of time.
  • runtime=30
    The test duration in seconds when time_based is set.
  • rw=read
    The accepted values are:

    • read : Sequential reads.
    • write : Sequential writes.
    • trim : Sequential trims.
    • randread : Random reads.
    • randwrite : Random writes.
    • randtrim : Random trims.
    • rw : Mixed sequential reads and writes.
    • readwrite : Sequential read and write mix.
    • randrw : Mixed random reads and writes.
    • trimwrite : Trim and write mix, trims preceding writes.
  • rwmixread=100
    For a mixed workload, the percentage of reads; the default is 50 (50% read + 50% write).
  • bs=64k
    bs, or blocksize, is the size of each I/O; the default is 4K. The right value depends on the workload the storage will serve; a file server, web server, and database each call for different settings.
  • iodepth=16
    How many I/Os are in flight at the same time. More is not automatically better; RAID setups are the usual case that benefits from a larger value.

The results of the run are as follows.

[root@localhost ~]# fio fio01.cfg 
Job1: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=1
Job2: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=2
Job3: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=4
fio-2.1.10
Starting 3 processes
Jobs: 2 (f=2): [RR_] [54.5% done] [58240KB/0KB/0KB /s] [910/0/0 iops] [eta 00m:30s] 
Job1: (groupid=0, jobs=1): err= 0: pid=17986: Tue May  8 09:34:02 2018
  read : io=62656KB, bw=2088.2KB/s, iops=32, runt= 30006msec
    slat (usec): min=11, max=124, avg=14.38, stdev= 7.76
    clat (msec): min=1, max=2454, avg=30.63, stdev=219.63
     lat (msec): min=1, max=2454, avg=30.65, stdev=219.63
    clat percentiles (usec):
     |  1.00th=[ 1048],  5.00th=[ 1304], 10.00th=[ 1800], 20.00th=[ 2288],
     | 30.00th=[ 2448], 40.00th=[ 2992], 50.00th=[ 6432], 60.00th=[12352],
     | 70.00th=[14400], 80.00th=[16768], 90.00th=[21632], 95.00th=[28800],
     | 99.00th=[419840], 99.50th=[2408448], 99.90th=[2441216], 99.95th=[2441216],
     | 99.99th=[2441216]
    bw (KB  /s): min=   11, max= 6956, per=5.33%, avg=4046.77, stdev=2969.24
    lat (msec) : 2=11.24%, 4=32.89%, 10=9.70%, 20=34.73%, 50=10.42%
    lat (msec) : 500=0.10%, 750=0.10%, >=2000=0.82%
  cpu          : usr=0.02%, sys=0.06%, ctx=1051, majf=0, minf=45
  IO depths    : 1=107.4%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=979/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1
Job2: (groupid=0, jobs=1): err= 0: pid=17987: Tue May  8 09:34:02 2018
  read : io=233088KB, bw=7769.9KB/s, iops=121, runt= 30002msec
    slat (usec): min=11, max=101, avg=12.82, stdev= 4.44
    clat (usec): min=611, max=2454.7K, avg=17125.90, stdev=165392.97
     lat (usec): min=623, max=2454.8K, avg=17138.93, stdev=165392.97
    clat percentiles (usec):
     |  1.00th=[ 1064],  5.00th=[ 1288], 10.00th=[ 1304], 20.00th=[ 1320],
     | 30.00th=[ 1336], 40.00th=[ 1768], 50.00th=[ 2256], 60.00th=[ 2448],
     | 70.00th=[ 5344], 80.00th=[12352], 90.00th=[16192], 95.00th=[21120],
     | 99.00th=[31616], 99.50th=[42240], 99.90th=[2441216], 99.95th=[2441216],
     | 99.99th=[2441216]
    bw (KB  /s): min=   11, max=83200, per=18.56%, avg=14093.54, stdev=18046.35
    lat (usec) : 750=0.33%, 1000=0.16%
    lat (msec) : 2=46.31%, 4=21.40%, 10=6.67%, 20=19.64%, 50=5.03%
    lat (msec) : 100=0.03%, >=2000=0.47%
  cpu          : usr=0.04%, sys=0.23%, ctx=3799, majf=0, minf=64
  IO depths    : 1=0.1%, 2=103.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=3641/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=2
Job3: (groupid=0, jobs=1): err= 0: pid=17988: Tue May  8 09:34:02 2018
  read : io=1936.5MB, bw=66079KB/s, iops=1032, runt= 30008msec
    slat (usec): min=10, max=298, avg=11.90, stdev= 2.51
    clat (usec): min=911, max=729675, avg=3860.36, stdev=8966.29
     lat (usec): min=924, max=729687, avg=3872.48, stdev=8966.31
    clat percentiles (usec):
     |  1.00th=[ 1800],  5.00th=[ 2448], 10.00th=[ 2608], 20.00th=[ 2608],
     | 30.00th=[ 2608], 40.00th=[ 2608], 50.00th=[ 2640], 60.00th=[ 3120],
     | 70.00th=[ 3248], 80.00th=[ 3280], 90.00th=[ 4128], 95.00th=[10048],
     | 99.00th=[22912], 99.50th=[27264], 99.90th=[33024], 99.95th=[35584],
     | 99.99th=[724992]
    bw (KB  /s): min=   12, max=88064, per=87.68%, avg=66580.42, stdev=27781.05
    lat (usec) : 1000=0.03%
    lat (msec) : 2=1.32%, 4=87.94%, 10=5.71%, 20=3.41%, 50=1.58%
    lat (msec) : 750=0.01%
  cpu          : usr=0.38%, sys=2.07%, ctx=37650, majf=0, minf=94
  IO depths    : 1=0.1%, 2=0.1%, 4=119.9%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=30980/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: io=2225.3MB, aggrb=75934KB/s, minb=2088KB/s, maxb=66079KB/s, mint=30002msec, maxt=30008msec

Disk stats (read/write):
  sdb: ios=41819/0, merge=0/0, ticks=245107/0, in_queue=245386, util=99.77%

The key points of the results are:

Run status group 0 (all jobs):
   READ: io=2225.3MB, aggrb=75934KB/s, minb=2088KB/s, maxb=66079KB/s, mint=30002msec, maxt=30008msec

This is the combined total for the three jobs ([Job1] iodepth=1, [Job2] iodepth=2, [Job3] iodepth=4): io=2225.3MB, aggrb=75934KB/s.

For the individual results, look further up in the output.

This is the result for [Job1] iodepth=1: io=62656KB, bw=2088.2KB/s (note that iops ≈ bw ÷ bs, i.e. 2088 KB/s ÷ 64 KB ≈ 32).

Job1: (groupid=0, jobs=1): err= 0: pid=17986: Tue May  8 09:34:02 2018
  read : io=62656KB, bw=2088.2KB/s, iops=32, runt= 30006msec

This is the result for [Job2] iodepth=2: io=233088KB, bw=7769.9KB/s.

Job2: (groupid=0, jobs=1): err= 0: pid=17987: Tue May  8 09:34:02 2018
  read : io=233088KB, bw=7769.9KB/s, iops=121, runt= 30002msec

This is the result for [Job3] iodepth=4: io=1936.5MB, bw=66079KB/s.

Job3: (groupid=0, jobs=1): err= 0: pid=17988: Tue May  8 09:34:02 2018
  read : io=1936.5MB, bw=66079KB/s, iops=1032, runt= 30008msec
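The per-job lines above can also be pulled out of a saved run with a short awk script. This is a sketch; fio.log is an assumed file name holding output in the fio-2.x format shown here:

```shell
# Print each job's bandwidth field from fio-2.x output in fio.log.
# The job name is taken from the "JobN: (groupid=...): err=..." header.
awk '/^Job[0-9]+:.*err=/ { job = $1 }
     /^ *read :/ { for (i = 1; i <= NF; i++)
                       if ($i ~ /^bw=/) { sub(/,$/, "", $i); print job, $i } }' fio.log
```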

All jobs in the same job file are counted in the same statistics group by default; the new_group option gives each job its own group.

  • new_group
    Start a new statistics group, so each job's results are reported as a separate group.
[root@localhost ~]# vi fio01.cfg
[global]
direct=1
refill_buffers=1
ramp_time=5
ioengine=libaio
time_based
runtime=30
filename=/dev/sdb
rw=read
rwmixread=100
bs=64k
new_group
[Job1]
iodepth=1
[Job2]
iodepth=2
[Job3]
iodepth=4
[root@localhost ~]#  fio fio01.cfg 
Job1: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=1
Job2: (g=1): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=2
Job3: (g=2): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=4
fio-2.1.10
Starting 3 processes
Jobs: 2 (f=2): [RR_] [54.5% done] [84928KB/0KB/0KB /s] [1327/0/0 iops] [eta 00m:30s]
Job1: (groupid=0, jobs=1): err= 0: pid=18049: Tue May  8 09:35:55 2018
  read : io=42688KB, bw=1422.7KB/s, iops=22, runt= 30007msec
    slat (usec): min=12, max=118, avg=13.68, stdev= 6.15
    clat (usec): min=952, max=2449.2K, avg=44971.76, stdev=283577.31
     lat (usec): min=965, max=2449.2K, avg=44985.66, stdev=283577.54
    clat percentiles (usec):
     |  1.00th=[ 1048],  5.00th=[ 1400], 10.00th=[ 1848], 20.00th=[ 2320],
     | 30.00th=[ 2480], 40.00th=[ 3408], 50.00th=[ 7456], 60.00th=[12608],
     | 70.00th=[14528], 80.00th=[17024], 90.00th=[22656], 95.00th=[28544],
     | 99.00th=[2408448], 99.50th=[2441216], 99.90th=[2441216], 99.95th=[2441216],
     | 99.99th=[2441216]
    bw (KB  /s): min=   11, max= 6474, per=100.00%, avg=2825.84, stdev=2743.59
    lat (usec) : 1000=0.15%
    lat (msec) : 2=10.19%, 4=33.58%, 10=8.70%, 20=34.18%, 50=11.39%
    lat (msec) : 250=0.15%, 500=0.15%, 2000=0.15%, >=2000=1.35%
  cpu          : usr=0.02%, sys=0.04%, ctx=715, majf=0, minf=47
  IO depths    : 1=106.9%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=667/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1
Job2: (groupid=1, jobs=1): err= 0: pid=18050: Tue May  8 09:35:55 2018
  read : io=117120KB, bw=3903.9KB/s, iops=60, runt= 30001msec
    slat (usec): min=11, max=92, avg=12.66, stdev= 4.84
    clat (usec): min=671, max=2462.4K, avg=34108.70, stdev=249446.62
     lat (usec): min=695, max=2462.4K, avg=34121.58, stdev=249446.80
    clat percentiles (usec):
     |  1.00th=[ 1032],  5.00th=[ 1304], 10.00th=[ 1320], 20.00th=[ 1400],
     | 30.00th=[ 2040], 40.00th=[ 2320], 50.00th=[ 2512], 60.00th=[ 5408],
     | 70.00th=[11328], 80.00th=[14272], 90.00th=[19072], 95.00th=[27520],
     | 99.00th=[2408448], 99.50th=[2441216], 99.90th=[2473984], 99.95th=[2473984],
     | 99.99th=[2473984]
    bw (KB  /s): min=   11, max=14236, per=100.00%, avg=5877.06, stdev=5802.98
    lat (usec) : 750=0.55%, 1000=0.27%
    lat (msec) : 2=29.03%, 4=27.77%, 10=8.69%, 20=24.55%, 50=7.93%
    lat (msec) : 250=0.11%, 2000=0.11%, >=2000=1.04%
  cpu          : usr=0.04%, sys=0.10%, ctx=1946, majf=0, minf=62
  IO depths    : 1=0.1%, 2=105.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1829/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=2
Job3: (groupid=2, jobs=1): err= 0: pid=18051: Tue May  8 09:35:55 2018
  read : io=2171.9MB, bw=74126KB/s, iops=1158, runt= 30002msec
    slat (usec): min=10, max=368, avg=12.29, stdev= 2.95
    clat (usec): min=910, max=2474.2K, avg=3439.42, stdev=13579.90
     lat (usec): min=924, max=2474.2K, avg=3451.95, stdev=13579.90
    clat percentiles (usec):
     |  1.00th=[ 1944],  5.00th=[ 1960], 10.00th=[ 2384], 20.00th=[ 2608],
     | 30.00th=[ 2608], 40.00th=[ 2608], 50.00th=[ 2608], 60.00th=[ 2832],
     | 70.00th=[ 3248], 80.00th=[ 3280], 90.00th=[ 3600], 95.00th=[ 4640],
     | 99.00th=[19840], 99.50th=[25728], 99.90th=[33024], 99.95th=[36096],
     | 99.99th=[38656]
    bw (KB  /s): min=   12, max=87936, per=98.28%, avg=72850.80, stdev=22902.67
    lat (usec) : 1000=0.02%
    lat (msec) : 2=7.17%, 4=85.36%, 10=4.43%, 20=2.04%, 50=0.99%
    lat (msec) : >=2000=0.01%
  cpu          : usr=0.50%, sys=2.28%, ctx=41629, majf=0, minf=94
  IO depths    : 1=0.1%, 2=0.1%, 4=118.2%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=34746/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: io=42688KB, aggrb=1422KB/s, minb=1422KB/s, maxb=1422KB/s, mint=30007msec, maxt=30007msec

Run status group 1 (all jobs):
   READ: io=117120KB, aggrb=3903KB/s, minb=3903KB/s, maxb=3903KB/s, mint=30001msec, maxt=30001msec

Run status group 2 (all jobs):
   READ: io=2171.9MB, aggrb=74126KB/s, minb=74126KB/s, maxb=74126KB/s, mint=30002msec, maxt=30002msec

Disk stats (read/write):
  sdb: ios=43395/0, merge=0/0, ticks=244671/0, in_queue=244710, util=99.74%
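With new_group set, the per-group aggregate lines can be collected the same way (again a sketch against an assumed fio.log file):

```shell
# Print each statistics group's aggregate bandwidth from fio.log.
awk '/^Run status group/ { grp = $4 }
     /^ *READ:/ { for (i = 1; i <= NF; i++)
                      if ($i ~ /^aggrb=/) { sub(/,$/, "", $i); print "group " grp ": " $i } }' fio.log
```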

So far all the jobs have been running at the same time. If we want the first job to finish before the next one starts, the wait_for_previous or stonewall parameter does exactly that.

  • stonewall , wait_for_previous
    Wait for the previous job to finish before running the next one. stonewall implies new_group.
[root@localhost ~]# vi fio01.cfg
[global]
direct=1
refill_buffers=1
ramp_time=5
ioengine=libaio
time_based
runtime=30
filename=/dev/sdb
rw=read
rwmixread=100
bs=64k
new_group
wait_for_previous
[Job1]
iodepth=1
[Job2]
iodepth=2
[Job3]
iodepth=4
[root@localhost ~]# fio fio01.cfg 
Job1: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=1
Job2: (g=1): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=2
Job3: (g=2): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=4
fio-2.1.10
Starting 3 processes
Jobs: 1 (f=1): [__R] [63.9% done] [87680KB/0KB/0KB /s] [1370/0/0 iops] [eta 01m:00s]
Job1: (groupid=0, jobs=1): err= 0: pid=18085: Tue May  8 09:39:24 2018
  read : io=2553.1MB, bw=87171KB/s, iops=1362, runt= 30001msec
    slat (usec): min=9, max=1702, avg=12.81, stdev= 8.96
    clat (usec): min=7, max=18598, avg=719.40, stdev=312.09
     lat (usec): min=298, max=18612, avg=732.41, stdev=312.09
    clat percentiles (usec):
     |  1.00th=[  612],  5.00th=[  628], 10.00th=[  628], 20.00th=[  636],
     | 30.00th=[  644], 40.00th=[  644], 50.00th=[  644], 60.00th=[  644],
     | 70.00th=[  644], 80.00th=[  652], 90.00th=[ 1240], 95.00th=[ 1304],
     | 99.00th=[ 1320], 99.50th=[ 1336], 99.90th=[ 3312], 99.95th=[ 6880],
     | 99.99th=[12608]
    bw (KB  /s): min=   12, max=87936, per=98.60%, avg=85950.07, stdev=11294.30
    lat (usec) : 10=0.01%, 500=0.59%, 750=88.35%, 1000=0.09%
    lat (msec) : 2=10.75%, 4=0.13%, 10=0.08%, 20=0.01%
  cpu          : usr=0.65%, sys=2.64%, ctx=48463, majf=0, minf=44
  IO depths    : 1=116.7%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=40863/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1
Job2: (groupid=1, jobs=1): err= 0: pid=18086: Tue May  8 09:39:24 2018
  read : io=2559.8MB, bw=87367KB/s, iops=1365, runt= 30002msec
    slat (usec): min=11, max=342, avg=12.41, stdev= 2.75
    clat (usec): min=306, max=7986, avg=1450.74, stdev=383.93
     lat (usec): min=351, max=7999, avg=1463.39, stdev=383.88
    clat percentiles (usec):
     |  1.00th=[  876],  5.00th=[ 1288], 10.00th=[ 1288], 20.00th=[ 1288],
     | 30.00th=[ 1288], 40.00th=[ 1288], 50.00th=[ 1304], 60.00th=[ 1304],
     | 70.00th=[ 1304], 80.00th=[ 1944], 90.00th=[ 1960], 95.00th=[ 1976],
     | 99.00th=[ 2384], 99.50th=[ 2384], 99.90th=[ 7520], 99.95th=[ 7520],
     | 99.99th=[ 7968]
    bw (KB  /s): min=   12, max=87936, per=98.40%, avg=85969.05, stdev=11295.20
    lat (usec) : 500=0.01%, 750=0.01%, 1000=2.92%
    lat (msec) : 2=93.45%, 4=3.51%, 10=0.11%
  cpu          : usr=0.61%, sys=2.57%, ctx=48533, majf=0, minf=61
  IO depths    : 1=0.1%, 2=116.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=40955/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=2
Job3: (groupid=2, jobs=1): err= 0: pid=18095: Tue May  8 09:39:24 2018
  read : io=2559.9MB, bw=87368KB/s, iops=1365, runt= 30003msec
    slat (usec): min=11, max=268, avg=12.48, stdev= 2.10
    clat (usec): min=2102, max=9707, avg=2915.84, stdev=481.15
     lat (usec): min=2114, max=9720, avg=2928.54, stdev=481.14
    clat percentiles (usec):
     |  1.00th=[ 2384],  5.00th=[ 2608], 10.00th=[ 2608], 20.00th=[ 2608],
     | 30.00th=[ 2608], 40.00th=[ 2608], 50.00th=[ 2608], 60.00th=[ 2896],
     | 70.00th=[ 3280], 80.00th=[ 3280], 90.00th=[ 3280], 95.00th=[ 3664],
     | 99.00th=[ 4128], 99.50th=[ 4128], 99.90th=[ 8896], 99.95th=[ 8896],
     | 99.99th=[ 9664]
    bw (KB  /s): min=   12, max=87936, per=98.41%, avg=85976.63, stdev=11296.04
    lat (msec) : 4=96.89%, 10=3.12%
  cpu          : usr=0.55%, sys=2.63%, ctx=48440, majf=0, minf=92
  IO depths    : 1=0.1%, 2=0.1%, 4=116.7%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=40955/w=0/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: io=2553.1MB, aggrb=87171KB/s, minb=87171KB/s, maxb=87171KB/s, mint=30001msec, maxt=30001msec

Run status group 1 (all jobs):
   READ: io=2559.8MB, aggrb=87366KB/s, minb=87366KB/s, maxb=87366KB/s, mint=30002msec, maxt=30002msec

Run status group 2 (all jobs):
   READ: io=2559.9MB, aggrb=87368KB/s, minb=87368KB/s, maxb=87368KB/s, mint=30003msec, maxt=30003msec

Disk stats (read/write):
  sdb: ios=143238/0, merge=0/0, ticks=242006/0, in_queue=241932, util=98.48%
