86

I have seen several sites which recommend reducing swappiness to 10-20 for better performance.

Is this a myth or not? Is it a general rule? I have a laptop with 4 GB of RAM and a 128 GB SSD; what swappiness value do you recommend?

Thanks.

Jorge Castro
  • 71,754

7 Answers

122

Because most people believe that swapping = bad, and that if you don't reduce swappiness the system will swap when it really doesn't need to. Neither of those is really true. People associate swapping with times when their system is getting bogged down - however, it's mostly swapping because the system is getting bogged down, not the other way around. When the system swaps, it has already factored the performance cost into its decision to swap, and decided that not doing so would have a greater overall penalty in system performance or stability.

Overall, the default settings result in good performance and stability. I'd recommend leaving swappiness at the default. There are further avenues for Linux to improve its memory management to solve some edge cases, but by and large the swappiness control isn't a good workaround - adjust it in one direction and you may fix one issue but create others. If at all possible, simply installing more physical RAM (and leaving swappiness alone) eclipses all other remedies.

How Linux uses RAM

Any RAM that isn't being used by applications may be used as "cache". Cache is important for a fast, smooth running system, speeding up both reads and writes to disk.

If your applications increase their memory use to the point they are using almost all your RAM, your cache will shrink and on average disk operations will slow down as a result. It's not enough to have just tens of megabytes, or less, for cache nowadays.
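If you want to see this balance on your own machine, the kernel exposes it in /proc/meminfo. Here is a minimal sketch (Linux-only, plain Python, no extra packages) that prints how much RAM is truly free versus being used as cache; the field names are the ones the kernel itself reports, nothing else is assumed:

```python
#!/usr/bin/env python3
"""Rough sketch: show how much RAM is free vs used as cache (Linux only)."""

def read_meminfo():
    """Parse /proc/meminfo into a dict of values in KiB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are reported in kB
    return info

if __name__ == "__main__":
    m = read_meminfo()
    cache = m.get("Cached", 0) + m.get("Buffers", 0)
    print(f"Total RAM:      {m['MemTotal'] / 1024:.0f} MiB")
    print(f"Truly free:     {m['MemFree'] / 1024:.0f} MiB")
    print(f"Used as cache:  {cache / 1024:.0f} MiB")
    # MemAvailable (kernel 3.14+) estimates how much memory new applications
    # could use without swapping, counting reclaimable cache.
    if "MemAvailable" in m:
        print(f"Available:      {m['MemAvailable'] / 1024:.0f} MiB")
```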

If applications increase their memory use even further - assuming you have no swap space - you will not only have no space for cache but will eventually run out of memory, and your system will have to kill running processes. Killing processes is worse than a slowdown, as it gives you an unstable, unpredictable system.

How Linux uses swap

To combat both of these problems, your system can re-allocate some seldom-used application memory to the swap space on your disk, freeing RAM. The freed RAM can prevent processes from dying due to running out of memory, and can restore a little cache so disk operations run more smoothly.

This re-allocation isn't done according to a definite cutoff though. You don't reach a certain percentage of allocation after which Linux starts swapping. It has a "fuzzy" algorithm. It takes a lot of things into account, which can best be described by "how much pressure is there for memory allocation". If there is a lot of "pressure" to allocate new memory, then it will increase the chances some will be swapped to make more room. If there is less "pressure" then it will decrease these chances.

Your system has a "swappiness" setting which helps you tweak how this "pressure" is calculated. It's often falsely represented as a "percentage of RAM", but it's not; it's just a value used as part of the formula. Values around 40 to 60 are the recommended sane values, 60 being the default nowadays.
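You can check your current value by reading /proc/sys/vm/swappiness (or with `sysctl vm.swappiness`). Here is a small sketch, assuming a Linux system; the write path is left commented out because changing the value needs root and only lasts until reboot:

```python
#!/usr/bin/env python3
"""Sketch: inspect (and optionally change) vm.swappiness on Linux."""

SWAPPINESS_PATH = "/proc/sys/vm/swappiness"

def get_swappiness() -> int:
    with open(SWAPPINESS_PATH) as f:
        return int(f.read().strip())

def set_swappiness(value: int) -> None:
    # Writing requires root; the change only lasts until the next reboot.
    with open(SWAPPINESS_PATH, "w") as f:
        f.write(str(value))

if __name__ == "__main__":
    print(f"Current vm.swappiness: {get_swappiness()}")
    # set_swappiness(60)  # uncomment and run as root to change it temporarily
```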

Letting your system swap when it has to is overall a very good thing, even if you have a lot of RAM. Letting your system swap if it needs to gives you peace of mind that if you ever run into a low memory situation even temporarily (while running a short process that uses a lot of memory), your system has a second chance at keeping everything running. If you go so far as to disable swapping completely, then you risk processes being killed due to not being able to allocate memory.

What is happening when the system is bogged down and swapping heavily?

Swapping is a slow and costly operation, so the system avoids it unless it calculates that the trade-off in cache performance will make up for it overall, or if it's necessary to avoid killing processes.

A lot of the time people will look at a system that is thrashing the disk heavily and using a lot of swap space and blame swapping for it. That's the wrong approach to take. If swapping ever reaches this extreme, it means that swapping is your system's attempt to deal with low memory problems, not the cause of the problem, and that without swapping your running processes would simply be killed at random.
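If you want to see what actually got pushed out during one of these episodes, each process reports a VmSwap figure in /proc/&lt;pid&gt;/status. A rough sketch (Linux-only, standard library; processes you can't read are skipped, so run it as root to see everything):

```python
#!/usr/bin/env python3
"""Sketch: list the processes using the most swap, via /proc/<pid>/status."""
import os

def swap_usage():
    """Yield (swap_kib, pid, command_name) for each readable process."""
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            name, vmswap = "?", 0
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("Name:"):
                        name = line.split(None, 1)[1].strip()
                    elif line.startswith("VmSwap:"):
                        vmswap = int(line.split()[1])  # reported in kB
            if vmswap:
                yield vmswap, int(pid), name
        except (FileNotFoundError, PermissionError, ProcessLookupError):
            continue  # process exited or is not readable by this user

if __name__ == "__main__":
    for kib, pid, name in sorted(swap_usage(), reverse=True)[:15]:
        print(f"{kib / 1024:8.1f} MiB  pid {pid:<7} {name}")
```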

What about desktop systems? Don't they require a different approach?

Users of a desktop system do indeed expect the system to "feel responsive" in response to user-initiated actions such as opening an application, which is the type of action that can sometimes trigger a swap due to the increase in memory required.

One way some people try to tweak this is to reduce the swappiness parameter which can increase the system's tolerance to applications using up memory and running low on cache space.

However, this is just shifting the goalposts. The first application may now load without a swap operation, but it leaves less slack for the next application that loads. The same swapping may just occur later, when you next open an application. In the meantime, system performance is lower overall due to the reduced cache size. Thus, any benefit from the reduced swappiness setting may be hard to measure: it reduces swapping delay at some times but causes other slow performance at other times. Reducing swappiness a little may be justified if you know what you're doing, but reducing it to as low as 10 leaves the system tolerant of very small cache sizes and more liable to have to swap at short notice.

Disabling swap completely should be avoided as you lose the added protection against out-of-memory conditions which can cause processes to crash or be killed.

The most effective remedy by far is to install more RAM if you can afford it.

Can swap be disabled on a system that has lots of RAM anyway?

If you have far more RAM than you're likely to need for applications, then you'll rarely need swap. Therefore, disabling swap probably won't make a difference in all usual circumstances. But if you have plenty of RAM, leaving swap enabled also won't have any penalty because the system doesn't swap when it doesn't need to.

The only situation in which it would make a difference is the unlikely case where the system finds itself running out of memory and the cache is consequently being squeezed - and that is exactly the kind of situation in which you would want swap most. So you can safely leave swap on its normal settings for added peace of mind, without it ever having a negative effect when you have plenty of memory.

But how can swap speed up my system? Doesn't swapping slow things down?

The act of transferring data from RAM to swap is a slow operation, but it's only taken when the kernel is pretty sure the overall benefit as a result of keeping a reasonable cache size will outweigh this. If your system is getting really slow as a result of disk thrashing, swap is not causing it but only trying to alleviate it.

Once data is in swap, when does it come out again?

Any given part of memory will come back out of swap as soon as it's used - read from or written to. However, typically the memory that is swapped is memory that has not been accessed in a long time and is not expected to be needed soon.

Transferring data out of swap is about as time-consuming as putting it in there. Your kernel won't remove data from it if it doesn't need to. While data is in swap and not being used, it leaves more memory for other things that are being used, and more system cache.
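You can watch this behaviour directly: the kernel's pswpin and pswpout counters in /proc/vmstat only move when pages are actually brought back from swap or pushed out to it. A small sketch (Linux-only) that samples them once per second; on an idle system with data just sitting in swap you should see both stay at zero:

```python
#!/usr/bin/env python3
"""Sketch: watch pages moving in and out of swap, via /proc/vmstat."""
import time

def swap_counters():
    """Return cumulative (pages swapped in, pages swapped out) since boot."""
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            counters[key] = int(value)
    return counters["pswpin"], counters["pswpout"]

if __name__ == "__main__":
    prev_in, prev_out = swap_counters()
    print("pages swapped in / out per second (Ctrl-C to stop)")
    while True:
        time.sleep(1)
        cur_in, cur_out = swap_counters()
        print(f"in: {cur_in - prev_in:6d}   out: {cur_out - prev_out:6d}")
        prev_in, prev_out = cur_in, cur_out
```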

Are there any cases where reducing swappiness is appropriate?

Yes: if you are running a server dedicated to one particular application that does not benefit from the system cache. Some database vendors, such as Oracle and MySQL/MariaDB, recommend in some cases reducing swappiness to between 1 and 10, as these database engines use their own caching.

Note that this is true only if your system is dedicated to that one task, and in the case of MySQL/MariaDB only if you are using purely InnoDB or XtraDB, and not MyISAM or Aria, etc. If the dedicated purpose of your system is centered around an application that does its own caching and does not benefit from system cache, lowering swappiness can be a good idea.
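If you do decide a dedicated database host warrants a lower value, the usual way to make it persistent on Ubuntu is a drop-in file under /etc/sysctl.d. A hedged sketch (must run as root; the file name 99-swappiness.conf and the value 10 are illustrative choices, not requirements):

```python
#!/usr/bin/env python3
"""Sketch: persist a lower vm.swappiness for a dedicated database host.

Run as root. The drop-in file name below is an arbitrary, conventional choice.
"""
import subprocess

CONF_PATH = "/etc/sysctl.d/99-swappiness.conf"  # hypothetical drop-in name
SWAPPINESS = 10  # example value for a host whose database does its own caching

if __name__ == "__main__":
    # Write the setting so it survives reboots...
    with open(CONF_PATH, "w") as f:
        f.write(f"vm.swappiness = {SWAPPINESS}\n")
    # ...and apply it immediately without rebooting.
    subprocess.run(["sysctl", f"vm.swappiness={SWAPPINESS}"], check=True)
    print(f"Wrote {CONF_PATH} and applied vm.swappiness={SWAPPINESS}")
```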

thomasrutter
  • 36,774
  • 2
    Thank you for your thorough description. I think in my case (4GB RAM and 128GB SSD) and with my usage (Java EE development and several OSes in VirtualBox) swappiness=20 is suitable. What do you think? – Saeed Zarinfam Sep 05 '12 at 05:55
  • 1
    I think the default of 60 would be best, in my opinion. – thomasrutter Sep 05 '12 at 06:10
  • If i remember correctly default ubuntu is 60, at 8 gb ram sometimes Ubuntu take a little ram, when set 10 or lower Ubuntu won't take any. http://namhuy.net/1563/how-to-tweak-and-optimize-ssd-for-ubuntu-linux-mint.html – Blanca Higgins Jun 07 '14 at 17:30
  • 4
    @BlancaHiggins did you read the post you commented on? Your comment doesn't seem to describe what swappiness actually does. – thomasrutter Jun 08 '14 at 04:45
  • 3
    This is an excellent answer. Thank you so much for such a great explanation. – Dan Barron Dec 10 '15 at 04:16
  • and take a look at this too, https://help.ubuntu.com/community/SwapFaq#What_is_swappiness_and_how_do_I_change_it.3F – azerafati May 17 '16 at 12:46
  • 2
    It may not look like it, but as a result of "wall of text" accusations I've made an effort to simplify this answer a lot while still retaining the relevant information. – thomasrutter Dec 18 '17 at 05:12
  • 1
    Glad to see this answer here... I've seen so many ppl lately claim that adding swap files reduces performance. – Rondo Sep 15 '18 at 18:04
  • 6
    Part of the info in that SwapFaq is misleading in my opinion: that setting it to 100 will "aggressively" swap. I think it's more accurate to say that is a very cautious, pro-active setting, swapping at the first sign that the available memory or cache is getting even a little bit low. Whereas low settings like 10 are more of a risky, thrillseeking setting, avoiding doing any swapping until available memory is very low and the cache is pretty much completely gone, leaving the system without much wiggle room. – thomasrutter Feb 21 '19 at 22:35
  • 2
    The official documentation doesn't explicitly state this, but I believe that 100 is not the maximum value for swappiness, and that it is not a "percentage" as many profess. I haven't tried, but values over 100 may result in even earlier swapping. Not that I'd recommend them. – thomasrutter Jul 15 '19 at 23:57
  • This answer seems to contradict my actual experience. After a runaway process causes my laptop to write a few gigabytes to swap, it takes more than a few days for the browser to recover to normal speeds. Running swapoff -a; swapon -a (when it finally completes) immediately restores browser speed. Can you add an explanation for this phenomenon? Your claim that memory that is needed is rapidly restored seems patently false. – David Roundy Mar 27 '20 at 12:51
    @DavidRoundy it is not possible for memory that is in swap to be read from or written to. It must be swapped back in to be used. That said, this only happens on a per-page basis, so it is possible that what you're experiencing is that in the following days you're accessing different sets of those swapped pages and freeing them piecemeal. In such a case swapoff does all this in one go - it probably takes a while, but you'll go back to feeling like before the event where you ran out of memory. Swap is not the cause. You can't tune swap to have no penalty when you fully run out of memory. – thomasrutter Mar 28 '20 at 09:54
  • ... but in your specific case swapoff can prevent later lagginess by imposing the full penalty of swapping out in one go instead of piece by piece at random later times. – thomasrutter Mar 28 '20 at 09:59
22

On a usual desktop, you have 4-5 active tasks that consume 50-60% of memory. If you set swappiness to 60, then about 1/4-1/3 of the ACTIVE task pages will be swapped out. That means that for every task change, for every new tab you open, for every JS execution, there will be a swapping operation.

The solution is to set swappiness to 10. By practical observation, this causes the system to give up disk I/O cache (which plays little to no role on a desktop, as the read/write cache is virtually not used at all unless you are constantly copying LARGE files) instead of pushing anything into swap. In practice, that means the system will refuse to swap pages, cutting the I/O cache instead, unless it hits 90% used memory. And that in turn means a smooth, swapless, fast desktop experience.

On a file server, however, I would set swappiness to 60 or even more, because a server does not have huge active foreground tasks that must be kept in memory as a whole, but rather a lot of smaller processes that are either working or sleeping, and not really changing their state immediately. Instead, a server often serves (pardon the pun) the exact same data to clients, making disk I/O caches much more valuable. So on a server, it is much better to swap out the sleeping processes, freeing memory space for disk cache requests.

On desktops, however, this exact setting leads to swapping out blocks of memory of REAL applications that nearly constantly modify or access this data.

Oddly enough, browsers often reserve large chunks of memory that they constantly modify. When such chunks are swapped out, it takes a while for them to be brought back when requested - and at the same time the browser goes on updating its caches, which causes huge latencies. In practice, you will be sitting for 2 minutes waiting for a single web page in a new tab to load.

A desktop does not really care about disk I/O, because a desktop rarely reads and writes cacheable, repeating, big portions of data. Cutting disk I/O in order to prevent swapping as much as possible is much more favorable for a desktop than having 30% of memory reserved for disk cache while 30% of RAM (full of blocks belonging to actively used applications) is swapped out.

Just launch htop, open a browser, GIMP, and LibreOffice - load a few documents there and then browse for several hours. It's really that easy.
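If you'd rather have numbers than an impression from htop, a tiny logger sampling /proc/meminfo while you work will show whether anything is really being pushed to swap during such a session. A rough sketch (Linux-only), to be left running in a terminal:

```python
#!/usr/bin/env python3
"""Sketch: log swap usage every 30 seconds while you use the desktop."""
import time

def swap_used_mib():
    """Return swap in use, in MiB, from /proc/meminfo."""
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.strip().split()[0])  # kB
    return (values["SwapTotal"] - values["SwapFree"]) / 1024

if __name__ == "__main__":
    while True:
        print(f"{time.strftime('%H:%M:%S')}  swap used: {swap_used_mib():.0f} MiB")
        time.sleep(30)
```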

terdon
  • 100,812
Linux dude
  • 237
  • 2
  • 2
  • 5
    +1 for server vs desktop differences description. Server's disk cache could be done on a disk field. – Dee Dec 11 '14 at 15:21
  • 1
    If this is the case, why do both server and desktop versions of Ubuntu default to a swappiness of 60? If what you state is true, then it would make more sense for the desktop version to be provided with a default of 20 or even 10, but it is not. – JAB Nov 08 '17 at 04:19
  • 1
    Reference for the implication that swappiness is a direct percentage of ram that gets swapped? I don't think it works like that. – Xen2050 Mar 23 '18 at 10:05
  • 4
    It doesn't. Swappiness is not related to a percentage of RAM. It's a knob that tweaks a fuzzy algorithm towards being more or less likely to swap in a given problem situation. I also think the description of server vs desktop workloads in this answer makes a bunch of assumptions that don't always hold. – thomasrutter Oct 25 '18 at 23:57
  • 6
    This answer is unclear and riddled with false assumptions and assertions. The currently accepted answer is much more accurate and I would caution anyone taking anything from this answer as read. – agittins Sep 04 '20 at 09:21
  • +1 for the word pun: the "server serves data". ;) – loved.by.Jesus Feb 11 '22 at 16:10
  • Swappiness is just flat out NOT related to a "percentage of RAM" as this answer claims. The entire premise behind this answer is wrong before even getting to wild inaccuracies like cache not mattering for desktop performance. – thomasrutter Aug 14 '23 at 02:46
12

If you run a Java server on your Linux system you should really consider reducing swappiness well below the default value of 60, so 20 is indeed a good start. Swapping is a killer for a garbage-collecting process, because each collection needs to touch large parts of the process memory. The OS does not have the means to detect such processes and get things right for them. It is best practice to avoid swapping as much as you possibly can for production application servers.

Andreas
  • 129
  • 1
  • 2
  • 1
    It's true that if you dedicate a server to a specialized workload that you know won't benefit from system cache (like a database server) then reducing swappiness might make sense. I don't think that garbage collection is a specialized enough case though. If memory is touched frequently it's not going to be swapped, it'll be kept in physical RAM. The only time this isn't the case is if you have a severe low memory situation - and swapping is not responsible. – thomasrutter Nov 12 '18 at 23:23
  • 1
    Usually all Java garbage collectors are generational - there is a Young Generation (~1/3 of heap), collected and used as an allocation pool very often, and an Old Generation (~2/3 of heap), touched relatively rarely by GC, so it seems Old Generation pages could be swapped without any problems – ALZ Nov 22 '19 at 14:08
7

I would suggest doing some experiments whilst having the system monitor open to see exactly how much load your machine is under. I am also running with 4GB of memory and a 128GB SSD, so I changed the swappiness value to 10, which not only improved performance whilst under load but, as a bonus, will also increase the life of the SSD drive as it will suffer fewer writes.

For a simple video tutorial on how to do this with a full explanation see the YouTube video below

http://youtu.be/i6WihsFKJ7Q

Tech-Compass
  • 71
  • 1
  • 2
  • 1
    Great video that you made, but the video doesn't really answer the question directly; it's more of a howto on changing swappiness. – jmunsch Jun 07 '14 at 17:00
  • 1
    +1 for the SSD life hint; for an SSD it is best if the system is as read-only as possible, the rest should stay in memory, and today memory is usually not a big problem on current desktop PCs. – Dee Dec 11 '14 at 14:59
5

I want to add some perspective from a Big Data Performance engineer to give others more background on 2017 technology.

My personal experience is that while I have typically disabled swapping to guarantee that my systems are running at max speed, on my workstation, for one specific problem, I have found that swappiness values of 1 and 10 lead to freezing (forever) and long pauses, whereas a swappiness of 80 for this particular application leads to much better performance and shorter pauses than the default (60). Note that I had 8GB RAM and 4x 256GB of swap backed by 1 HDD. I would normally state the precise statistics seen in my benchmarks and the full hardware specs, but I haven't done any yet, and it's a recent low-end desktop that is not important here.

Back at my former company, the reason we did not enable swap on Spark servers with [500GB to 4TB] x [10-100] nodes is that we saw poor performance as a sign to redesign the data pipeline and data structures in a more efficient manner. We also did not want to benchmark the HDDs/SSDs. Also, swapping that much RAM would need 10-30 disks per node with parallel writes to minimize disk access time.

Today, as 20 years ago and 20 years into the future, it will remain the case that some problems are too large for RAM. With infinite time and money, we can buy/lease more hardware or redesign any process to get performance to a desirable level. Swapping is just a hack that allows us to ignore the real problem (we don't have enough RAM and we don't want to spend more money).

For those who think higher swappiness is bad advice, here is a little perspective. In the past, hard drives had just a few KB of cache, if any. The interface was IDE/Parallel ATA. The CPU bus was also much slower, along with RAM and many other things. In short, systems were very slow (relative to today) in every way. A couple of years ago, SSDs used SATA3; today they use the NVMe protocol, which has significant latency improvements, and hard drives now have many MB of cache. The most interesting part is when you use a modern SSD (much more stable read/write endurance and performance) over NVMe or PCIe as your swap storage. It's the best compromise between cost and performance. Please do not try this with cheap or old SSDs.

Swap+SSDs! With high-performance non-volatile storage, I would highly recommend experimenting with a high swappiness value. It mainly depends on the memory access patterns (randomly accessing all memory vs. rarely accessing most of it), memory usage, whether the disk bandwidth is already saturated, and the actual cost of thrashing.

ldmtwo
  • 150
2

A personal anecdote. I didn't know about swappiness, and in hindsight tweaking it might have fixed my problem. My system is old and had 4GB of RAM.

I upgraded my Linux OS to the next long-term-support version. That version was "passively" using more RAM, which made my system use more swap. The system started bogging down because the swap is on an HDD.

Looking at the stats, the RAM and swap used combined were not greater than my total RAM. The problem was partially, as 'Linux dude' mentioned, that browsers often reserve large chunks of memory that they constantly modify. I was using Firefox (YouTube in particular is heavy), and because of that, large chunks were going into swap even though they were actually needed.

I ended up getting more RAM, which did solve my problem, but it might have been possible to postpone buying it if I had tried setting swappiness to a lower value. I don't regret buying the RAM; it was a good upgrade, but not everyone can make an upgrade.

h3dkandi
  • 181
  • 6
1

It could be that a lot of the perceived swapping behaviour on startup or when opening programs is actually Linux reading configuration files etc. from disk. So it may be best to check with a system monitor program before assuming that the hard drive access is due to swapping.
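One way to check this without a GUI monitor: /proc/vmstat keeps separate counters for all data paged in/out from block devices (pgpgin/pgpgout, in KiB, which includes ordinary file reads such as configuration files) and for pages moved through swap (pswpin/pswpout). A minimal sketch comparing the two over a ten-second window; if the swap counters barely move while the paging counters climb, the disk activity isn't swapping:

```python
#!/usr/bin/env python3
"""Sketch: is the disk activity swap traffic, or ordinary file I/O?

Compares /proc/vmstat counters over a short interval. pgpgin/pgpgout count
all data paged in/out from block devices (in KiB, including normal file
reads); pswpin/pswpout count only pages moved via swap.
"""
import os
import time

PAGE_KIB = os.sysconf("SC_PAGE_SIZE") // 1024  # usually 4

def vmstat():
    with open("/proc/vmstat") as f:
        return {k: int(v) for k, v in (line.split() for line in f)}

if __name__ == "__main__":
    before = vmstat()
    time.sleep(10)  # sample while the "suspicious" disk activity is happening
    after = vmstat()
    disk_kib = (after["pgpgin"] - before["pgpgin"]) + (after["pgpgout"] - before["pgpgout"])
    swap_kib = ((after["pswpin"] - before["pswpin"]) + (after["pswpout"] - before["pswpout"])) * PAGE_KIB
    print(f"All disk paging over 10s: {disk_kib} KiB")
    print(f"Of which swap traffic:    {swap_kib} KiB")
```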

Seth
  • 58,122