73

It happens pretty often to me when I am compiling software in the background: suddenly everything starts to slow down and eventually freezes (if I do nothing), as I have run out of both RAM and swap space.

This question assumes that I have enough time and resources to open up Gnome Terminal, search through my history, and execute one sudo command.

What command can save me from having to do a hard reboot, or any reboot at all?

Yaron
  • 13,173
Anon
  • 12,063
  • Comments are not for extended discussion; this conversation has been moved to chat. – Thomas Ward Jul 05 '17 at 17:22
  • 1
    If you run out of swap space, I think you have too little of it. I got 20G of swap space on this computer. The point is for it to give you enough time with a usable system to kill whatever is eating up your memory. It's not something where you take only what you'll use, but what you hope you'll never use. – JoL Jul 08 '17 at 00:11
  • 1
    Are you sure both RAM and swap are being filled? If that were the case, the OOM handler would kill your compiler and free up memory (and also screw up your build process). Otherwise, I'd think it's just getting filled up, and maybe your system is slow because your swap is on your system disk. – sudo Jul 08 '17 at 03:01
  • 3
    Try reducing the number of your parallel builds if you don't have enough RAM to support it. If your build starts swapping, you will be way slower. With make, try -j4 for example for 4 parallel builds at a time. – Shahbaz Jul 08 '17 at 03:07
  • 1
    "Alexa order me 8 gigs of ram" –  Jul 10 '17 at 08:57

12 Answers

84

In my experience Firefox and Chrome use more RAM than my first 7 computers combined. Probably more than that but I'm getting away from my point. The very first thing you should do is close your browser. A command?

killall -9 firefox google-chrome google-chrome-stable chromium-browser

I've tied the most popular browsers together into one command there, but obviously if you're running something else (or know you aren't using one of these) just modify the command. The killall -9 ... is the important bit. People do get iffy about SIGKILL (signal number 9), but browsers are extremely resilient. More than that, terminating slowly via SIGTERM will mean the browser does a load of cleanup rubbish (which requires a burst of additional RAM), and that's something you can't afford in this situation.

If you can't get that into an already-running terminal or an Alt+F2 dialogue, consider switching to a TTY. Control + Alt + F2 will get you to TTY2, which should allow you to log in (though it might be slow) and should even let you use something like htop to debug the issue. I don't think I've ever run out of RAM to the point I couldn't get htop up.

The long-term solution involves either buying more RAM, renting it via a remote computer, or not doing what you're currently doing. I'll leave the intricate economic arguments up to you, but generally speaking RAM is cheap to buy, and if you only need it in bursts, a VPS billed per minute or per hour is a fine choice.

Oli
  • 293,335
  • Comments are not for extended discussion; this conversation has been moved to chat. – Thomas Ward Jul 05 '17 at 17:23
  • I got a couple of commands linked to my own lazygit command that I use from time to time, maybe something like that could be applied here? The whole killall ... script could be reduced to a simple emptyram or something like that – Francisco Presencia Jul 05 '17 at 23:40
  • You don't need to run the full command if you know what browser is running and I'd assume most people who can identify a RAM shortage do. By extension, I'd find it harder to remember that I'd written an emptyram script than just punching in killall -9 firefox. – Oli Jul 06 '17 at 16:41
  • 2
    Buying RAM... why not just download more RAM? – Stephan Bijzitter Jul 07 '17 at 09:48
  • 1
    Well you might joke but if you need to do something for a short time that needs far more RAM and CPU that you have, renting a VPS by the minute is pretty economical for one-shots. – Oli Jul 07 '17 at 10:15
67

On a system with the Magic System Request Key enabled, pressing Alt + System Request + f (if not marked on your keyboard, System Request is often on the Print Screen key) will manually invoke the kernel's out-of-memory killer (OOM killer), which tries to pick the worst-offending process for memory usage and kill it. You can do this if you have less time than you've described and the system is just about to start (or maybe has already started) thrashing, in which case you probably don't care exactly what gets killed, just that you end up with a usable system. Sometimes this can end up killing X, but most of the time these days it's a lot better at picking a bad process than it used to be.
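
If the key combination does nothing, the feature is probably disabled (see the sysrq comment below). A minimal sketch, using the standard /proc interfaces, for checking that and for triggering the same OOM kill from a terminal:

cat /proc/sys/kernel/sysrq                 # 1 = all SysRq functions enabled, 0 = none; other values are a bitmask
echo 1 | sudo tee /proc/sys/kernel/sysrq   # enable everything until the next reboot
echo f | sudo tee /proc/sysrq-trigger      # same effect as Alt + SysRq + f, without the keyboard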

Muzer
  • 818
  • 2
    This is a very bad idea if your memory is running out because you're compiling very complex stuff. There is a very non-trivial chance of killing your compiler and losing all your progress up to now, which in a very large project could be a big deal. – T. Sar Jul 03 '17 at 11:19
  • 5
    @T.Sar if you're going straight into thrashing, you already either lose or get a chance of killing memory-eater. You don't gain anything if you just refrain from acting. – Ruslan Jul 03 '17 at 11:26
  • 4
    @Muzer this will only work when you have set kernel.sysrq to 1 or a number including the correct bit in your /etc/sysctl.d/10-magic-sysrq.conf. – Ruslan Jul 03 '17 at 11:29
  • 1
    @Ruslan I'm not saying to refrain from acting, just that this specific command can cause some undesirable loss of progress, and maybe another option could be a better choice. In Windows 7, inserting a flashdrive with TurboBoost configured on it could very save you from a OOM issue, for example, by giving the System more memory to work with. – T. Sar Jul 03 '17 at 11:30
  • 9
    @T.Sar You're not going to lose your progress if you're using a sane build system. You'll retain all the object files but the one you were actually compiling, then you'll get to go back to pretty much where you left off. – Muzer Jul 03 '17 at 12:37
  • 2
    @T.Sar I dunno, if you're doing a parallel build and the files have reasonably complex interdependencies, I can see it consuming a fair amount of memory. That plus the usual memory hog email client and web browser with more than a few tabs open, and I can see it really pushing weaker systems. – Muzer Jul 03 '17 at 12:45
  • @Muzer The thing is - for a compiler, memory in use is work in progress. If the compiler ever needs to load that much stuff in the first place, to the point that it's not cleaning up and just piling up stuff forever, you certainly aren't building something sane. Keep in mind that Linux itself - which is an extremely huge and complex system - can be pretty much compiled by any development machine nowadays. I have very big doubts that the OP is compiling something more complex than Linux itself on a low-end machine. – T. Sar Jul 03 '17 at 12:49
  • 3
    @T.Sar Just because the thing you're compiling isn't sane doesn't mean the build system isn't sane. Build systems since time immemorial have stored object files for re-use in subsequent compilations. On the other hand, I can certainly name plenty of software projects with less sanity than Linux (which is generally pretty well-designed). For example, compiling something like Firefox or OpenOffice with 8 parallel build threads, I can easily see it taking in the order of gigabytes of RAM. There are also plenty of monolithic corporate systems that depend on hundreds of libraries. – Muzer Jul 03 '17 at 12:55
  • 7
    @T.Sar Linux isn't really complex from the compiler's POV. Actually there are hardly any C programs which are. What about C++? Have you ever tried building a program using Eigen or Boost? You'd be surprised how much memory the compiler sometimes eats with such programs — and they don't have to be complex themselves. – Ruslan Jul 03 '17 at 13:00
  • @T.Sar Do you mean Linux (the kernel that is used in some builds) or do you mean the GNU coreutils (bash, [, man etc.) or do you mean the GUI (X server, probably openbox, something else like LXDE) or do you mean the application software (stuff you get from apt or whatever package manager you use)? Some are more complex than others. – wizzwizz4 Jul 03 '17 at 17:49
  • 2
    @T.Sar "In Windows 7, inserting a flashdrive with TurboBoost configured on it could very save you from a OOM issue" ... I think you mean ReadyBoost, not TurboBoost (TurboBoost is a CPU frequency adaptation technology). ReadyBoost won't help in an OOM situation -- it provides additional disk cache, not additional virtual memory. – Jules Jul 04 '17 at 08:34
  • 2
    @T.Sar Linux itself is only 2 to 10 MB of compiled code, it's hardly a complex piece of software by today's standards. – Dmitry Grigoryev Jul 04 '17 at 13:27
  • 2
    @wizzwizz4: Well, that's kind of the point of *nix, that pretty much everything that is the "system" is smallish independent pieces. Also complexity & memory use of software really isn't all that closely related to complexity of compilation. I've worked on parallel apps that can use hundreds of GBytes and run for days doing some fairly complex calculations, yet they compile in a few minutes without overloading memory on a 2 GB laptop. – jamesqf Jul 05 '17 at 04:53
  • @Muzer: Regarding losing progress: I don't know what you call a "sane build system", and I haven't tried this on Linux, but e.g. canceling builds in Visual Studio has frequently given me unusable object files that I had to manually delete (since they were half-baked). It's not all-or-nothing-per-object-file necessarily, unless your compiler does it that way. – user541686 Jul 07 '17 at 07:31
  • @Mehrdad Never experienced that myself, but I've not used Visual Studio. GCC and Clang tend not to output the object file with its final filename until it's completely done - before that I guess it's saved as a temporary file or something. – Muzer Jul 07 '17 at 08:39
20

Contrary to other answers, I suggest that you disable swap while you are doing this. While swap keeps your system running in a predictable manner, and is often used to increase the throughput of applications accessing the disk (by evicting unused pages to allow room for the disk cache), in this case it sounds like your system is being slowed down to unusable levels because too much actively used memory is being forcibly evicted to swap.

I would recommend disabling swap altogether while doing this task, so that the out-of-memory killer will act as soon as the RAM fills up.
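
For reference, a minimal sketch of doing that just for the duration of the build (swapoff pulls everything back into RAM first, so it can itself take a while, and it fails if the swapped-out pages do not fit in RAM; the make invocation is only an example):

sudo swapoff -a   # stop using swap; the OOM killer now fires as soon as RAM is exhausted
make              # run the memory-hungry build (example)
sudo swapon -a    # re-enable swap when you are done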

Alternative solutions:

  • Increase the read speed of swap by putting your swap partition in RAID1
    • Or RAID0 if you're feeling risky but that will bring down a large number of running programs if any of your disks malfunction.
  • Decrease the number of concurrent build jobs ("more cores = more speed", we all say, forgetting that it takes a linear toll on RAM)
  • This could go both ways, but try enabling zswap in the kernel (a minimal sketch follows this list). This compresses pages before they are sent to swap, which may provide just enough wiggle room to speed your machine up. On the other hand, it could just end up being a hindrance with the extra compression/decompression it does.
  • Turn down optimisations or use a different compiler. Optimising code can sometimes take up several gigabytes of memory. If you have LTO turned on, you're going to use a lot of RAM at the link stage too. If all else fails, you can try compiling your project with a lighter-weight compiler (e.g. tcc), at the expense of a slight runtime performance hit to the compiled product. (This is usually acceptable if you're doing this for development/debugging purposes.)
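
For the zswap suggestion, here is a minimal runtime sketch, assuming your kernel was built with zswap support (the sysfs paths are the standard zswap module parameters; for a permanent setting you would instead add zswap.enabled=1 to the kernel command line):

grep -r . /sys/module/zswap/parameters/                            # show current zswap settings
echo 1 | sudo tee /sys/module/zswap/parameters/enabled             # turn zswap on until reboot
echo 20 | sudo tee /sys/module/zswap/parameters/max_pool_percent   # cap the compressed pool at 20% of RAM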
  • While I am doing what? – Anon Jul 03 '17 at 17:58
  • While you are compiling your project, or if you compile frequently, maybe while you are developing in general. – Score_Under Jul 03 '17 at 18:02
  • "out-of-memory killer will act as soon as the RAM fills up" - this has never happened to me, ever. I have left computers running overnight, and they are as frozen the next day as when I left them hours prior. Depends on the application maybe? – Anon Jul 03 '17 at 18:05
  • 6
    If you have swap turned off, that is Linux's behaviour when you run out of memory. If Linux does not invoke the out-of-memory killer but freezes instead, that might signify that there are deeper problems with the setup. Of course, if swap is turned on, the behaviour is slightly different. – Score_Under Jul 03 '17 at 20:09
  • 10
    @Akiva Have you ever tried without swap? This answer is spot-on. I’d like to add that running sudo swapoff -a may save you when you are already in a bind: it will immediately stop any additional use of swap space, i.e. the OOM killer should be invoked in the next instant and bring the machine into working order. sudo swapoff -a is also an excellent precautionary measure when debugging memory leaks or compiling, say, firefox. Normally, swap is a bit useful (e.g. for hibernation or swapping out really unneeded stuff), but when you’re actually using memory, the freezes are worse. – Jonas Schäfer Jul 04 '17 at 07:22
  • @JonasWielicki: That's fantastic. I'd assumed that swapoff would refuse to work, or just trigger more thrashing as the system tried to page in whatever it could (and evict read-only pages backed by files) when a runaway process is evicting everyone else's pages. I hadn't thought of it triggering the OOM killer on the next demand for more pages. – Peter Cordes Jul 04 '17 at 07:53
  • 2
    @Score_Under: Separate swap partitions on each disk is supposed to be significantly more efficient than swap on an md raid0 device. I forget where I read that. The Linux RAID wiki recommends separate partitions over raid0, but doesn't say anything very strong about why it's better. Anyway yes, RAID1 or RAID10n2 makes sense for swap, especially if you mostly just want to be able to swap out some dirty but very cold pages to leave more RAM for the pagecache. i.e. swap performance isn't a big deal. – Peter Cordes Jul 04 '17 at 07:57
  • -1. On a computer with limited RAM, disabling swap during a compilation is one sure way to crash it. – Dmitry Grigoryev Jul 04 '17 at 12:44
  • @DmitryGrigoryev Yes, programs will exit without warning - because of Linux's OOM killer - but this is far preferable to the system locking up without recourse. – Score_Under Jul 04 '17 at 15:33
  • 2
    My point is that following your advice, one may not be able to run those programs at all, because they need swap. A build that fails 100% of the time is worse than a build which has 50% chance to lock up the system, isn't it? – Dmitry Grigoryev Jul 04 '17 at 17:17
  • 1
    @Dmitry But the cause of the failure is fairly obvious - you just caused it - and you can make an informed decision at that point to turn it back on (or not). – Riking Jul 04 '17 at 22:50
  • 2
    Without swap, on many machines it is impossible to compile large chunks of code. Why would you assume that it's the compiling he wants to sacrifice? – David Schwartz Jul 04 '17 at 23:15
  • 2
    @DavidSchwartz Sometimes one is caught by surprise that a process requires that high amount of memory. Once that is known (and it is good to find out in a sane way, i.e. crashing the compilation and not locking up the computer entirely, possibly losing valuable data in other processes this way), it is possible to free up more memory, e.g. by closing browsers, mail clients and other non-compiler-related software for the duration of the compilation process and in a controlled manner. With swap and bad I/O scheduling, all you get is a freeze you’re unlikely to recover from. – Jonas Schäfer Jul 09 '17 at 15:43
14

You can use the following command (repeatedly if needed) to kill the process using the most RAM on your system:

ps -eo pid --no-headers --sort=-%mem | head -1 | xargs kill -9

With:

  • ps -eo pid --no-headers --sort=-%mem: display the process ids of all running processes, sorted by memory usage
  • head -1: only keep the first line (process using the most memory)
  • xargs kill -9: kill the process

Edit after Dmitry's accurate comment:

This is a quick and dirty solution that should be executed when there are no sensitive tasks running (tasks that you don't want to kill -9).
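
If you would rather see what is about to be killed first, the same ps sorting can be used for inspection only (a harmless variation; nothing is killed here):

ps -eo pid,comm,%mem --no-headers --sort=-%mem | head -5   # the five biggest memory consumers, largest first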

Gohu
  • 386
  • 6
    This is much worse than letting the OOM killer handle the situation. The OOM killer is much smarter than that. Do you really run such commands on a computer with ongoing compilations? – Dmitry Grigoryev Jul 04 '17 at 13:08
  • @DmitryGrigoryev it's so smart to sometimes kill Xorg on my desktop. In modern kernels OOMK seems to have gained some sanity, but I wouldn't really trust it after all that. – Ruslan Jul 08 '17 at 19:17
11

Before running your resource-consuming commands, you could also use the setrlimit(2) system call, probably via the ulimit builtin of your bash shell (or the limit builtin in zsh), notably with -v for RLIMIT_AS. Then any over-large virtual address space consumption (e.g. via mmap(2) or sbrk(2), as used by malloc(3)) will fail, with errno(3) set to ENOMEM.

Then they (i.e. the memory-hungry processes started from your shell after you typed ulimit) would fail or be terminated before they could freeze your system.
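
A minimal sketch of that idea, assuming bash (whose ulimit -v takes a value in KiB; the limit applies to the current shell and everything started from it, and the 4 GiB figure is only an example):

ulimit -v 4194304   # cap the virtual address space of this shell and its children at about 4 GiB
make                # allocations beyond the cap now fail with ENOMEM instead of dragging the machine into swap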

Read also Linux Ate My RAM and consider disabling memory overcommitment (by running the command echo 2 > /proc/sys/vm/overcommit_memory as root; the default value 0 only applies a heuristic check, while 2 actually refuses to overcommit. See proc(5)...).

11

this happens pretty often to me when I am compiling software in the background

In that case, use something like "killall -9 make" (or whatever you are using to manage your compilation, if not make). This will stop the compilation proceeding further, will SIGHUP all the compiler processes launched from it (hopefully causing them to stop as well) and, as a bonus, doesn't need sudo, assuming you're compiling as the same user you're logged in as. And since it kills the actual cause of your problem instead of your web browser, X session or some process at random, it won't interfere with whatever else you were doing on the system at the time.
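
If some compiler or linker processes survive the killall (SIGKILL gives make no chance to tidy up its children), you can kill those by name as well; the names below are what a typical GCC-based build spawns, so adjust them for your toolchain:

killall -9 cc1 cc1plus ld   # GCC's actual compiler and linker processes; assumption - use the names your build shows in ps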

  • 2
    'tis just a shame that I had to scroll down so far to find this answer. I was hoping someone would propose a way that would suspend progress on this RAM eater. – TOOGAM Jul 06 '17 at 02:46
  • It is nothing near the answer that OP expects, but it answers the question literally: my crap machine is rendered unusable when I build on it - stop building on crap machine. – 9ilsdx 9rvj 0lo Jul 07 '17 at 14:00
9

Create some more swap for yourself.

The following will add 8G of swap:

sudo dd if=/dev/zero of=/root/moreswap bs=1M count=8192   # create an 8 GiB file of zeroes
sudo chmod 600 /root/moreswap                             # swap files should not be world-readable
sudo mkswap /root/moreswap                                # format it as swap
sudo swapon /root/moreswap                                # start swapping to it immediately

It will still be slow (you are swapping) but you shouldn't actually run out. Modern versions of Linux can swap to files. About the only use for a swap partition these days is for hibernating your laptop.
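
To confirm the extra swap is active, and to remove it once the memory crunch is over (standard util-linux tools; swapoff takes a while because it has to pull the swapped pages back into RAM):

swapon --show                 # the new file should be listed; free -h shows the enlarged swap total
sudo swapoff /root/moreswap   # stop using it
sudo rm /root/moreswap        # and reclaim the disk space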

Eliah Kagan
  • 117,780
  • 1
    I've this method implemented as a script, actually, here. Quite useful for adding swap on the fly. – Sergiy Kolodyazhnyy Jul 03 '17 at 15:17
  • Note, making a swap file only works for some filesystems. BTRFS for example does not support a swap file, while Ext4 does. – Anon Jul 03 '17 at 15:49
  • 7
    Some swap is generally wise, but allocating large amounts simply lets the machine thrash more before the OOM killer steps in and picks a volunteer. The hoary old rule of thumb about "double your RAM as swap" is long dead. Personally I see no value in allocating more than ~1 GB swap total. – Criggie Jul 04 '17 at 01:39
  • 5
    With ext4, you can fallocate -l 8G /root/moreswap instead of dd to avoid ever needing to do 8GB of I/O while the system is thrashing. This doesn't work with any other filesystem, though. Definitely not XFS, where swapon sees unwritten extents as holes. (I guess this xfs mailing list discussion didn't pan out). See also swapd, a daemon which creates/removes swap files on the fly to save disk space. Also https://askubuntu.com/questions/905668/does-ubuntu-support-dynamic-swap-file-sizing – Peter Cordes Jul 04 '17 at 07:47
  • But on modern desktops with reasonable amounts of RAM and disk space, that's probably not useful. It just makes it slower to recover if a buggy program is going berserk allocating+using memory. – Peter Cordes Jul 04 '17 at 07:49
  • @Criggie,@Peter Cordes the question is presented as an immediate problem. Adding swap will allow more things to fit inside virtual memory at the cost of speed. Consuming lots of memory doesn't necessarily mean the program is going berserk just that it needs more memory than you have. – William Hay Jul 04 '17 at 12:34
  • 1
    @Criggie " Personally I see no value in allocating more than ~1 GB swap total" - Have you tried to build Firefox? – Dmitry Grigoryev Jul 04 '17 at 13:03
  • @DmitryGrigoryev Really? Is Firefox actually that hefty of a build? – Anon Jul 04 '17 at 13:13
  • 1
    @Akiva Last time I have checked, the recommended build configuration was 16 GB of RAM. The main executable file (xul.dll) is around 50 MB, so it's about 10 times heavier than Linux kernel. – Dmitry Grigoryev Jul 04 '17 at 13:40
  • 1
    I'm totally with @Criggie on this one. If your machine has a modern amount of memory, best to not let it thrash forever if something goes haywire. If you need more swap for something specifically, you can always temporarily swapon some more. – sudo Jul 07 '17 at 19:41
  • @Criggie you have to be careful about overcommit settings though. – GnP Jul 09 '17 at 05:25
7

One way to get a chunk of free RAM on short notice is to use zram, which creates a compressed RAM disk and swaps there. With any half-decent CPU, this is much faster than regular swap, and the compression rates are pretty high with many modern RAM hogs like web browsers.

Assuming you have zram installed and configured, all you have to do is run

sudo service zramswap start
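
The package providing that service varies by release; on Ubuntu it is usually zram-config or zram-tools (an assumption worth checking against your archive). To confirm the compressed swap device is actually in use and see how well it compresses:

sudo apt install zram-tools   # or zram-config, depending on the release (assumption)
swapon --show                 # a /dev/zram0 entry should appear once the service has started
zramctl                       # per-device compressed vs. uncompressed sizes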
3

Another thing one could do is free up the page cache via this command:

echo 3 | sudo tee /proc/sys/vm/drop_caches

From kernel.org documentation (emphasis added):

drop_caches

Writing to this will cause the kernel to drop clean caches, as well as reclaimable slab objects like dentries and inodes. Once dropped, their memory becomes free.

To free pagecache: echo 1 > /proc/sys/vm/drop_caches
To free reclaimable slab objects (includes dentries and inodes): echo 2 > /proc/sys/vm/drop_caches
To free slab objects and pagecache: echo 3 > /proc/sys/vm/drop_caches

This is a non-destructive operation and will not free any dirty objects. To increase the number of objects freed by this operation, the user may run `sync' prior to writing to /proc/sys/vm/drop_caches. This will minimize the number of dirty objects on the system and create more candidates to be dropped.
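
Following that last suggestion from the documentation, the full sequence looks like this (sync only flushes dirty data to disk, so it is safe; it just makes more of the cache clean and therefore droppable):

sync                                         # write dirty pages out so they become clean, droppable cache
echo 3 | sudo tee /proc/sys/vm/drop_caches   # then drop pagecache, dentries and inodes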

Sergiy Kolodyazhnyy
  • 105,154
  • Interesting... care to explain that command logic? – Anon Jul 03 '17 at 15:50
  • 1
    @Akiva Basically this tells the Linux kernel to free up the RAM. This doesn't get rid of the cause; killing the offending process does, so Oli's answer is the solution to the problem. Dropping caches will prevent your system from running out of memory and therefore from freezing, thus buying you time to figure out the actual issue. This will probably be a bit faster than making a swap file, especially if you're on a hard drive and not on an SSD. – Sergiy Kolodyazhnyy Jul 03 '17 at 16:29
  • 7
    The cache is the first thing to go when you fill up memory, so I don't think this will help very much. In fact, I don't think this command has a practical use outside of debugging kernel behaviour or timing disk access optimisations. I would humbly recommend against running this command on any system in need of more performance. – Score_Under Jul 03 '17 at 17:51
  • 2
    @Score_Under - "The cache is the first thing to go when you fill up memory" -- well, that depends on your setting in /proc/sys/vm/swappiness. With swappiness set to 0, you're right. With the default setting of 60, you're close. With it set to 200, however, it'll be the least recently-used pages of running processes that get dropped first... in that particular case, this command may be useful. But setting swappiness to 0 (or some low value, maybe 20 or 30) would be a better general approach, however. – Jules Jul 04 '17 at 08:46
  • 3
    @Score_Under This command was useful on old kernels with kswapd bug (some people even created cronjobs with it). But you're right, I doubt it will help with this question. – Dmitry Grigoryev Jul 04 '17 at 12:52
  • Once you're at the level where you handle things in such ways, get rid of the awkward sudo crutch :) – rackandboneman Jul 06 '17 at 19:49
  • @rackandboneman what do you mean ? – Sergiy Kolodyazhnyy Jul 06 '17 at 19:51
3

sudo swapoff -a will disable the swap, making the kernel automatically kill the process with the highest OOM score if the system runs out of memory. I use this if I know I'll be running something RAM-heavy that I'd rather kill if it goes out of control than let it go into swap and get stuck forever. Use sudo swapon -a to re-enable it afterwards.

Later, you may want to take a look at your swap settings. Sounds like your swap is on the same disk as the root partition, which would slow down your system when you hit swap, so avoid that if you can. Also, in my opinion, modern systems often get configured with too much swap. 32GiB RAM usually means 32GiB swap is allocated by default, as if you really want to put 32GiB into your swap space.
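
If you do review the settings, the most relevant knob here is vm.swappiness, which controls how eagerly the kernel swaps application pages out in favour of cache; a small sketch (the value 10 is only an example, and sysctl changes last until reboot unless written to /etc/sysctl.conf or /etc/sysctl.d/):

cat /proc/sys/vm/swappiness    # the default is usually 60
sudo sysctl vm.swappiness=10   # prefer dropping cache over swapping out application memory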

sudo
  • 131
1

Recently I found a solution to my problem.

Since the Linux OOM killer isn't able to do its job properly, I started using a userspace OOM Killer: earlyoom. It's written in C, fairly configurable and it's working like a charm for me.
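
earlyoom is packaged for current Ubuntu releases (an assumption to verify for your version); a minimal way to try it:

sudo apt install earlyoom
sudo systemctl enable --now earlyoom   # start it now and at every boot
systemctl status earlyoom              # confirm it is running and watching memory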

1

You said "compiling in the background". What are you doing in the foreground? If you are developing with Eclipse or another resource-heavy IDE, check whether everything is properly terminated in the console.

Development environments often let you start multiple processes under development, and these may keep hanging around even after you are no longer interested in them (sitting in the debugger, or simply not properly finished). If the developer does not pay attention, tens of forgotten processes may accumulate during the day, together using multiple gigabytes.

Check that everything that should be terminated in the IDE actually has been terminated.
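
A quick way to spot such leftovers from a terminal (the java pattern is only an example; substitute whatever runtime your IDE launches):

ps -eo pid,etime,%mem,comm --sort=-%mem | head -15   # long-running, memory-heavy processes float to the top
pgrep -af java                                       # list lingering JVMs, e.g. debuggees forgotten by Eclipse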

h22
  • 196