I've got a software RAID5 array consisting of four 8TB disks assembled using mdadm. When I try to measure write performance using fio with the command

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

taken from this answer (How to check hard disk performance), I am happy to see the results line:

WRITE: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=10.0GiB (10.7GB), run=54942-54942msec

However, if I use the Gnome Disks program, I see an average write rate over 100 samples of merely 1.8 MB/s! (I used the default settings: 10 MiB sample size.) Gnome Disks also reports an average write rate of 76.2 MB/s over the same benchmark, considerably lower than what fio reports using a 10 MB blocksize (~300 MB/s).

I know that these tools don't give results which are directly comparable, but something has to be amiss with Gnome Disks, considering that I don't see such abysmal write performance in practice. When copying a ~5 GB file in Nautilus, I do see very fast copying of the first ~1.3 GB (presumably going straight into a cache), then a stutter for a few seconds where it looks like no progress is being made, and then a very fast jump to completion (but I'm willing to assume that this is typical of the configuration).

Have I made a common mistake? What could possibly be the problem with Gnome Disks' measurement of the write speeds?

1 Answer

Looking at https://github.com/GNOME/gnome-disk-utility/blob/5baa52eff3036fc59648bc2e10c4d4ec69dec50b/src/disks/gdubenchmarkdialog.c#L1337 it doesn't look like gnome-disks submits its I/O with O_DIRECT or asynchronously (whereas your fio job does both). Further, gnome-disks does its I/O against a block device (whereas your fio job is using a file in a filesystem) and also chooses to do an fsync after EVERY write (https://github.com/GNOME/gnome-disk-utility/blob/5baa52eff3036fc59648bc2e10c4d4ec69dec50b/src/disks/gdubenchmarkdialog.c#L1358 ). On top of that, it does a read followed by a write of what it just read. I'd expect your fio job to thoroughly blow it away: being able to have 32 outstanding commands in flight is quite an advantage!

Gnome-disks seems to be doing something more akin to an fio job like:

# The following is dangerous and will IRRECOVERABLY DESTROY DATA
name=pseudo-gnome-disks-dangerous
filename=/dev/<dev>
rw=write
bs=1M
fsync=1
size=500M

but even the above doesn't capture the reads that gnome-disks is doing (since gnome-disks does a read and then a write of the same block). Also, in case it isn't obvious, the above fio job is DANGEROUS AND WILL IRRECOVERABLY DESTROY DATA, so don't run it unless you can afford for the data on the chosen device to be IRRECOVERABLY DESTROYED.
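If you just want to see how much the fsync-per-write pattern alone costs, a safer approximation is to point the same job at a scratch file in a filesystem instead of a device (the filename below is an assumption, matching the temp file in your original command). This still omits gnome-disks' read-before-write and the block-device target, so treat the numbers as illustrative only:

name=pseudo-gnome-disks-safer
filename=fio-tempfile.dat
rw=write
bs=1M
fsync=1
size=500M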

Have I made a common mistake? What could possibly be the problem with Gnome Disks' measurement of the write speeds?

I'm afraid you answered this yourself:

I know that these tools don't give results which are directly comparable

The only mistake was comparing different benchmarking tools against each other rather than only against themselves. Unless you know they're doing the same thing, the comparison isn't going to be fair.

TL;DR: gnome-disks is not doing the same thing as your original fio job, so you're making an apples-to-oranges comparison.

Anon