Swappiness of 60 or 10?

Hi,
Because I was reading about swap memory for my Raspberry Pi running NextCloud, I checked what the situation was on my Framework (Fedora Bluefin).
Apparently it’s set to 60. On forums I read that you can experiment safely, and some suggested 10 or even 1 or 0. On the other hand, I would think that the programmers who decided to set the default to 60 had good reasons to do so.
What are your thoughts about this?

TL;DR don’t change it, the people making these suggestions probably don’t understand what it does.

Swappiness doesn’t directly control how much swap is used; it sets the balance between the two types of memory the kernel prefers to reclaim. There are two types: file-backed memory that has been read from a file, and anonymous memory that programs have allocated. File-backed memory can simply be discarded and read from the file again later (though if it has been modified, it must first be written back to the file). Anonymous memory must be written to the swap file so that it can be read back when needed.
Swappiness is also just a suggestion to the kernel, which will try to do what it needs to survive a low memory situation.
If you still think you need to change it, you can experiment with different values over a week or two as you use your machine, and try to quantify any change in performance you see. Unless you’re running database servers, etc. you will probably only notice if performance drops dramatically.
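If you do want to experiment, these are the standard commands for checking and temporarily changing the value (this resets at reboot; the 10 below is only an example, not a recommendation):

    # check the current value
    sysctl vm.swappiness          # or: cat /proc/sys/vm/swappiness

    # change it temporarily; reverts at the next reboot (10 is just an example)
    sudo sysctl vm.swappiness=10

    # see how much anonymous vs. file-backed memory is in use right now
    grep -E '^(AnonPages|Cached|SwapTotal|SwapFree):' /proc/meminfo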

more here


Thanks. I understood it isn’t just how much swap is used. The rest I didn’t know. I will not change it.
I did change it on the Raspberry Pi, because there I saw it being used even when not much really happened. I have the impression NextCloud and Jellyfin are a bit faster, but that could be pure self-suggestion.
Thanks for the link!

In most cases kernel parameters should not be changed and the default settings should be used. The vm.swappiness parameter controls how aggressively the kernel should swap memory out to the swap device. 100 means the kernel should swap aggressively (though the kernel still decides when to do it) and 0 means avoid swapping as much as possible. For databases it is often suggested to set it to 1: almost never swap, except when memory is critically low and swapping may keep the system out of an out-of-memory situation.
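If a vendor really does recommend a specific value, the usual way to make it persistent is a drop-in file under /etc/sysctl.d. A sketch (the value 1 and the file name 99-swappiness.conf are just the example here, not something the distribution ships):

    # create a drop-in file; the name is arbitrary, only the .conf suffix matters
    echo 'vm.swappiness = 1' | sudo tee /etc/sysctl.d/99-swappiness.conf

    # apply all sysctl configuration without rebooting
    sudo sysctl --system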

For database servers there are some other kernel parameters to change, like vm.overcommit_memory = 2 to turn off over-committing memory. Why? Consider how airlines book seats: they over-book. They sell more tickets than there are seats on the plane, because from experience they know that some customers cancel, so they rarely end up with every seat claimed. For an airline it is cheaper to buy one or two bumped customers a hotel room than to fly with empty seats. They are doing overbooking.

What does this have to do with Linux? Whenever a process requests memory, e.g. 100 MB, Linux knows "by experience" that processes usually do not consume all of the memory they request; a process that asked for 100 MB may only touch 70 MB. So the kernel promises processes a bit more memory than physically exists, much like airline overbooking; in computing terms, it over-commits memory. In most cases this is perfectly fine, e.g. hundreds of processes each requesting a small amount of memory, like a web server serving pages to end users.

But what happens in the rare case when the physical memory really is exhausted? Then the kernel gets into an out-of-memory-killing situation: it looks for the process consuming the most memory and kills it. If a database server is installed on the machine (and most database servers are configured to consume a large amount of memory), one of the database server processes is a perfect candidate to be killed by the kernel. This, and some more kernel parameters depending on the database vendor, are the ones suggested to be changed.
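As a rough sketch of what that database-oriented tuning can look like (the ratio of 80 is only an illustration; check your database vendor's documentation for their recommended values):

    # strict accounting: the kernel will never promise more than swap + 80% of RAM
    sudo sysctl vm.overcommit_memory=2
    sudo sysctl vm.overcommit_ratio=80    # 80 is just an example value

    # compare how much memory is currently committed against the limit
    grep -E '^(CommitLimit|Committed_AS):' /proc/meminfo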

But let’s take a few steps back first… Before changing kernel parameters, we need to ask ourselves:

  • is there a software vendor that suggests changing this kernel parameter,
  • is there documentation that recommends changing it,
  • did we measure that we actually run low on memory and run into an overcommit or swappiness issue?

It is wise to do some measurements first, like executing the vmstat 1 command and observing the "si" (swap in) and "so" (swap out) columns. Are they 0 most of the time? If yes, then there is no swapping in your case; if there are larger numbers, then swapping is happening and changing swappiness may help. To check how close you are to overcommit trouble, run free -h and look at the "available" column: is it almost 0 bytes? Then you have a potential overcommit problem… But if you are getting near an out-of-memory situation, maybe you simply have too little memory and buying more is the wise thing to do, or you are running too many processes on your machine and should reduce their number.
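Concretely, the measurements look like this (the numbers you see will of course be your own; the column names are the ones printed by the procps tools):

    # print memory and swap statistics once per second; watch the "si"/"so" columns
    vmstat 1

    # show memory in human-readable units; look at the "available" column
    free -h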


But… these kernel parameters are nothing specific to Bluefin, and you may get better advice for your particular case on an upstream forum… depending on what your issue is in the first place.


Thanks, that was a very helpful explanation.
I don’t experience any issues on my Bluefin installation; I had already decided to leave it as is. I was just checking some “advice” found on forums.

On my Raspberry Pi it seems to help. Since NextCloud uses a database (and Jellyfin as well, I guess), that can explain why it’s useful there. So I will look into the overcommit thing too.