
So I have a server where I have important processes running 24/7, and once per day I run a backup of a specific folder like this:

time tar cf "${HOME}/${SNAP_NAME}" -C "${DATA_PATH}" . &>>"${LOG_PATH}"

The problem is that during this backup all of my other processes stall or perform very poorly, because tar saturates the disk's write I/O.

My question is: is there a way to limit this specific tar process to only 30-40% (or any number I set) of the available disk I/O? I have tried ionice, but unfortunately it doesn't help; tar still writes at full speed.
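One common workaround (a sketch, not from the question itself) is to stream the archive through `pv`, whose `-L` flag caps throughput, so tar never gets to saturate the disk. The paths and the 10 MiB/s limit below are placeholders; substitute your own `SNAP_NAME`, `DATA_PATH`, and `LOG_PATH`:

```shell
#!/bin/sh
# Sketch of a bandwidth-capped backup. These values are placeholders --
# substitute the ones from your own script.
SNAP_NAME=snapshot.tar
DATA_PATH=/tmp/backup-demo
LOG_PATH=/tmp/backup-demo.log

mkdir -p "$DATA_PATH"
echo "sample data" > "$DATA_PATH/file.txt"

# Write the archive to stdout and let pv cap throughput (here 10 MiB/s),
# so tar's writes trickle out instead of flooding the write buffers.
# Fall back to plain tar if pv is not installed.
if command -v pv >/dev/null 2>&1; then
    tar cf - -C "$DATA_PATH" . 2>>"$LOG_PATH" | pv -q -L 10m > "$HOME/$SNAP_NAME"
else
    tar cf "$HOME/$SNAP_NAME" -C "$DATA_PATH" . 2>>"$LOG_PATH"
fi
```

Note that `pv` limits the rate at which data enters the pipe, which indirectly paces the writes; it does not give a percentage of disk bandwidth, so you would pick an absolute rate that leaves headroom for your other processes.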

  • https://askubuntu.com/a/1049163/374062 – sleepyhead Apr 28 '23 at 20:02
  • ionice doesn't limit the I/O speed, it only prioritizes other processes; those may still starve due to CPU or memory usage. Limiting all three may help, but each situation is unique, so success is not guaranteed. Check with htop what consumes the most resources – sleepyhead Apr 28 '23 at 20:06
  • Yes, I already mentioned ionice doesn't help, and it's disk I/O that I need to limit, not CPU or RAM. – user1692021 Apr 28 '23 at 20:28
  • 1
    Search this site for "slow copy" -- basically, system buffers fill up because reads are faster than writes, and the whole system grinds to a crawl. Many solutions are offered for special cases, but once the program (tar) passes off the write buffer, that resource is no longer counted against it. noquota will at least limit buffering of the reads, and tar does offer user buffering -- might help a bit. – ubfan1 Apr 28 '23 at 22:17
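A kernel-level option related to what the comments describe is the cgroup v2 I/O controller, which systemd exposes through its resource-control directives; unlike ionice, these enforce an absolute bandwidth cap on the process's cgroup. A sketch of a service fragment for the backup job (the device path `/dev/sda`, the `40M` limits, and the `backup.sh` path are all assumptions to adjust for your system):

```ini
# backup.service (fragment) -- run the backup in a cgroup with an I/O cap.
# /dev/sda and 40M are placeholders for your actual device and budget.
[Service]
ExecStart=/usr/local/bin/backup.sh
IOReadBandwidthMax=/dev/sda 40M
IOWriteBandwidthMax=/dev/sda 40M
```

For a one-off run, `systemd-run --scope -p IOWriteBandwidthMax="/dev/sda 40M" <command>` applies the same cap without writing a unit file. This requires cgroup v2 with the io controller enabled, so whether it works depends on your distribution and kernel.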

0 Answers