20

I have a hunch that a certain intermittent bug might only manifest itself when disk reads are slow. Troubleshooting is difficult because I can't reliably reproduce it.

Short of simply gobbling IO with a high priority process, is there any way for me to simulate having a slow hard drive?

ændrük
  • 76,794
  • I remember seeing a command to tell hard drives to run at certain bus speeds. I'll see if I can dig it out. – Jeremy Nov 17 '10 at 01:22
  • man hdparm: take a look at the -X option, maybe? There are quite a few things in there you could use to slow down your drive, but some of them risk doing nasty things to the data! – Jeremy Nov 17 '10 at 01:27
  • Also, try mounting a network share as a folder (Google is your friend), maybe even over Wi-Fi, if that is plausible. – Jeremy Nov 17 '10 at 01:27
  • 1
    This isn't a direct answer, but: if I had an intermittent bug like this, I would probably try running the process under Valgrind (if it was in a compiled language), because that would likely capture IO race conditions. – poolie Nov 17 '10 at 21:55
  • 1
    Are you talking about a bug in an application, or the kernel, or a device driver? Or you don't know at all? It might help if you explained more. – poolie Nov 18 '10 at 01:29
  • @poolie It's this problem. It stopped happening on one computer when I installed an SSD. – ændrük Nov 19 '10 at 06:40
  • OK, I think it is highly likely to be what Jacob said on that bug, that gnome-settings-daemon is crashing, probably related to the timing of different components starting up. – poolie Nov 19 '10 at 06:59

10 Answers

16

Use nbd, the Network Block Device, and then rate-limit access to it using, say, trickle.

sudo apt-get install nbd-client nbd-server trickle
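
For example, the pieces might be wired together roughly like this (an untested sketch: the port, image size, paths, and rates are arbitrary examples):

# create a file-backed disk image to export
dd if=/dev/zero of=/tmp/nbd-disk.img bs=1M count=256

# export it over the loopback with nbd-server (port 2000 is an example)
nbd-server 2000 /tmp/nbd-disk.img

# load the nbd driver, then attach the export with nbd-client running
# under trickle, limited to roughly 20 KB/s in each direction
sudo modprobe nbd
sudo trickle -d 20 -u 20 nbd-client localhost 2000 /dev/nbd0

# use it like any other block device
sudo mkfs.ext4 /dev/nbd0
sudo mount /dev/nbd0 /mnt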
poolie
  • 9,241
  • +1 for a pretty cool solution. However, it's not a real test, because you're not going through the real hard disk device driver, which is where the problem may lie. – The Unix Janitor Nov 18 '10 at 01:01
  • 1
    I didn't think he was talking about a driver bug, but that was just an assumption. Let's see. – poolie Nov 18 '10 at 01:30
  • 20
    Is it possible to add the actual command to this answer? At the moment you only show how to install the tools required :) – Rich Sep 18 '15 at 07:33
5

# echo 1 > /proc/sys/vm/drop_caches

That'll slow you down :)

It'll force you to read from disk instead of taking advantage of the page cache.
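
The effect is one-shot (the cache starts refilling immediately), and the file accepts other values too. Running sync first writes out dirty pages so more of the cache can actually be dropped:

sync                               # flush dirty pages to disk first
echo 1 > /proc/sys/vm/drop_caches  # free the page cache
echo 2 > /proc/sys/vm/drop_caches  # free dentries and inodes
echo 3 > /proc/sys/vm/drop_caches  # free all of the above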

If you really wanted to get sophisticated, you could do something like fake a read error every nth access using the SCSI fault injection framework.

http://scsifaultinjtst.sourceforge.net/

ppetraki
  • 5,483
4

Have a USB 1.1 hub? Or a slow SD card? They'll get you down to under 10 Mbit/s.

Oli
  • 293,335
3

This is by no means a complete solution, but it may help in conjunction with other measures: There is an I/O scheduler much like a process scheduler, and it can be tweaked.

Most notably, you can actually choose amongst different schedulers:

~# cat /sys/block/sda/queue/scheduler 
noop anticipatory deadline [cfq] 
~# echo "deadline" > /sys/block/sda/queue/scheduler
~# cat /sys/block/sda/queue/scheduler 
noop anticipatory [deadline] cfq 
~# 

deadline may help you get more reproducible results.

noop, as its name implies, is insanely dumb, and will enable you to wreak absolute havoc on I/O performance with little effort.

anticipatory and cfq both try to be smart about it, though cfq is generally the smarter of the two. (As I recall, anticipatory is actually the legacy scheduler from right before the kernel started supporting multiple schedulers.)
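
If you want the scheduler choice to survive a reboot, one way (a sketch, assuming Ubuntu's GRUB 2 layout) is to set it on the kernel command line and then run sudo update-grub:

# in /etc/default/grub: add elevator=<scheduler> to the kernel options
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=deadline"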

3

You can use a virtual machine and throttle disk access. Here are some tips on how to do it in VirtualBox: see section 5.8 of the manual, "Limiting bandwidth for disk images": https://www.virtualbox.org/manual/ch05.html#storage-bandwidth-limit
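
For example, with VBoxManage (a sketch: the VM name, controller name, and disk medium below are placeholders for your own setup):

# create a bandwidth group capped at 10 MB/s
VBoxManage bandwidthctl "TestVM" add SlowDisk --type disk --limit 10M

# attach the VM's disk to that group
VBoxManage storageattach "TestVM" --storagectl "SATA" --port 0 --device 0 \
    --type hdd --medium slow.vdi --bandwidthgroup SlowDisk

# the limit can even be changed while the VM is running
VBoxManage bandwidthctl "TestVM" set SlowDisk --limit 5M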

Adi Roiban
  • 2,892
2

Apart from trying to slow down the hard drive itself, you could try using filesystem benchmarking tools such as bonnie++, which can cause a great deal of disk I/O.

sudo apt-get install bonnie++
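
A typical invocation looks something like this (the directory and size are examples):

# -d: directory on the disk under test
# -s: size of the test file (ideally at least twice your RAM, so the
#     page cache can't hide the disk)
# -n 0: skip the small-file creation tests
bonnie++ -d /tmp -s 4g -n 0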
Zanna
  • 70,465
ajmitch
  • 18,543
2

You could try copying a large file, such as an ISO of the Ubuntu install CD, and running two copies at once. That should slow your drive down quite a bit.
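
Something like this, for instance (the ISO path is an example):

# two simultaneous copies keep the disk seeking between streams
cp ubuntu-10.10-desktop-i386.iso /tmp/copy1.iso &
cp ubuntu-10.10-desktop-i386.iso /tmp/copy2.iso &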

RolandiXor
  • 51,541
1

I have recently figured out a setup where I've

  • moved the directory to my Google Drive
  • mounted it via the super-duper-slow client google-drive-ocamlfuse
  • created a symlink from the original path to the new one
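
In shell terms, the setup looks roughly like this (the paths and mount point are examples from my setup):

# mount Google Drive via FUSE (the first run walks you through OAuth)
mkdir -p ~/gdrive
google-drive-ocamlfuse ~/gdrive

# move the directory onto the slow mount, then symlink it back into place
mv ~/project/data ~/gdrive/data
ln -s ~/gdrive/data ~/project/data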

If 16 seconds of latency is not slow enough, you can just unplug your router.

For reference, here is the original use case, where I got the idea for this: https://github.com/goavki/apertium-apy/pull/76#issuecomment-355007128

0

Why not run iotop and see if the process that you are trying to debug is causing lots of disk reads/writes?
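
For example, to watch only the processes that are actually doing I/O, with accumulated totals:

sudo iotop -o -a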

Eliah Kagan
  • 117,780
  • 3
    I think this answer is seen as unhelpful because the mere fact that the process is doing lots of IO may be already known, or not a problem in itself. The issue is that there is some kind of timing-related bug in the way it handles those IOs. – poolie Nov 17 '10 at 21:54
0

How about make -j64? In the articles describing that new 200-line kernel performance patch, make -j64 was the task used to eat up a lot of the computer's resources.

Praweł
  • 6,490