kijin 4 days ago
For modern Linux servers with large amounts of RAM, my rule of thumb is between 1/8 and 1/32 of RAM, depending on what the machine is for. For example, one of my database servers has 128GB of RAM and 8GB of swap. It tends to stabilize around 108GB of RAM and 5GB of swap usage under normal load, so I know that a 4GB swap would have been less than optimal. A larger swap would have been a waste as well.
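A trivial sketch of that arithmetic in Python (swap_range_gb is just an illustrative name, not anything standard):

    # 1/32 .. 1/8 of RAM, per the rule of thumb above
    def swap_range_gb(ram_gb):
        return ram_gb / 32, ram_gb / 8

    low, high = swap_range_gb(128)
    print(f"swap between {low:.0f}GB and {high:.0f}GB")  # 4GB and 16GB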
ChocolateGod 3 days ago
I no longer use disk swap for servers, instead opting for Zram capped at 50% of RAM capacity with a high swappiness value. It'd be cool if Zram could apply to the RAM itself (like macOS's memory compression) rather than needing a fake swap device.
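For reference, a minimal sketch of that setup using systemd's zram-generator (assuming a recent version of the package is installed; the 50% cap matches the description above, and 180 is just one common "high" swappiness choice, not a universal recommendation):

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd

    # /etc/sysctl.d/99-zram.conf
    vm.swappiness = 180

A high swappiness is reasonable here because paging to compressed RAM is far cheaper than paging to disk, so you want the kernel to prefer it aggressively.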
man8alexd 4 days ago
The proper rule of thumb is to make the swap large enough to hold all inactive anonymous pages after the workload has stabilized, but not so large that a fast memory leak leads to swap thrashing and a long-delayed OOM kill. Another rule of thumb: the performance hit from the active working set spilling into swap scales with the spilled fraction times the RAM/SSD latency gap - 0.1% excess causes ~2x degradation, 1% causes ~10x, and 10% causes ~100x (assuming a 10^3 difference in latency between RAM and SSD).
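Those numbers check out under a simple weighted-latency model (my framing, not necessarily the exact model the parent had in mind): if a fraction f of working-set accesses hit swap instead of RAM, average access cost is (1 - f) + f * R, where R is the latency ratio.

    # Sanity-check the degradation figures above
    R = 1000  # RAM -> SSD latency ratio, per the 10^3 figure
    for f in (0.001, 0.01, 0.1):
        slowdown = (1 - f) + f * R  # weighted average access cost
        print(f"{f:.1%} excess -> ~{slowdown:.0f}x slower")
    # 0.1% -> ~2x, 1.0% -> ~11x, 10.0% -> ~101x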