shatsky 3 days ago

The author pushes the abstract idea of "page reclamation" ahead of performance, reliability and controllable service degradation, which are what people actually want, because he believes it is the one and only route to them, and then defends swap because swap is good for reclamation.

No, this is just plain wrong. There are very specific problems which happen when there is not enough memory.

1. Eviction of file-backed pages, causing ever more disk reads and eventually ending with programs effectively being executed from disk (shared libraries are also mmapped), which feels like a system lockup. This does not need any "egalitarian reclamation" abstraction or swap, and swap does not solve it. It can be solved simply by reserving some minimal amount of memory for buf/cache, with which the system stays responsive.

2. Eventual failure to allocate more memory for some process. Solutions like "page reclamation" that push unused pages out to swap can only raise the maximum amount of memory that can be used before this happens, from one finite value to a bigger finite value. When there is no memory left to free without losing data, some process must be killed, and swap does not solve that either. The least bad solution would be to warn the user in advance and let them choose which processes to kill.

See also https://github.com/hakavlad/prelockd
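
As I understand it, the core trick behind prelockd is just mlock(): map the executables and shared libraries you care about and lock them in RAM so reclaim can never touch them. A minimal sketch of that idea (not prelockd's actual code; the library path is only an example, and it needs CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK):

    /* Sketch only: map a binary or shared library read-only and mlock() it
     * so its pages cannot be reclaimed under memory pressure. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int lock_file(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror(path); return -1; }

        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return -1; }

        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);
        if (p == MAP_FAILED) { perror("mmap"); return -1; }

        if (mlock(p, st.st_size) < 0) {
            perror("mlock");
            munmap(p, st.st_size);
            return -1;
        }
        return 0; /* keep the mapping alive for the daemon's lifetime */
    }

    int main(void)
    {
        /* Example path, purely illustrative; the real daemon discovers its
         * targets by scanning the mappings of running processes. */
        if (lock_file("/usr/lib/x86_64-linux-gnu/libc.so.6") < 0)
            return 1;
        pause(); /* hold the locked mapping until the process is killed */
        return 0;
    }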

man8alexd 3 days ago | parent [-]

Neither executables nor shared libraries are going to be evicted if they are in active use and have the "accessed" bit set in their page tables. This code has been present in the kernel's mm/vmscan.c since at least 2012.
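
For reference, the check (I believe it lives in page_check_references(), later folio_check_references(), in mm/vmscan.c) boils down to something like the snippet below, which is my heavily simplified paraphrase and not the kernel source:

    #include <stdio.h>

    enum pageref { PAGEREF_RECLAIM, PAGEREF_KEEP, PAGEREF_ACTIVATE };

    /* Simplified illustration: a page that was referenced since the last scan
     * and belongs to a file-backed executable mapping is re-activated instead
     * of being reclaimed. */
    enum pageref check_references(int referenced, int vm_exec, int file_backed)
    {
        if (!referenced)
            return PAGEREF_RECLAIM;      /* accessed bit clear: fair game */
        if (vm_exec && file_backed)
            return PAGEREF_ACTIVATE;     /* running binaries, shared libraries */
        return PAGEREF_KEEP;             /* referenced, but may be scanned again */
    }

    int main(void)
    {
        /* A referenced page of a mapped executable is kept active. */
        printf("%d\n", check_references(1, 1, 1) == PAGEREF_ACTIVATE);
        return 0;
    }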

shatsky 3 days ago | parent [-]

Will look into that again. If you're right about the unevictability of these pages, what is the mechanism that causes the sudden extreme degradation of performance when the system is almost out of memory because some app is gradually consuming it - going from a quite responsive system to a totally unresponsive one that can stay stuck thrashing the disk for ages until the OOM killer fires?

man8alexd 3 days ago | parent [-]

Once your active working set starts spilling into swap, performance degrades very sharply. The difference in latency between RAM and SSD is orders of magnitude. Assuming a 10^3 difference: 0.1% memory excess causes ~2x degradation, 1% causes ~10x, 10% causes ~100x.
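
A back-of-envelope sketch of that arithmetic (my own simplifying assumptions: accesses spread uniformly over the working set, and a fixed ~1000:1 swap-to-RAM latency ratio): the average access cost is (1 - f) + f * 1000 for spill fraction f, so even a tiny spill dominates.

    #include <stdio.h>

    int main(void)
    {
        double ratio = 1000.0;                 /* assumed swap:RAM latency ratio */
        double spill[] = { 0.001, 0.01, 0.1 }; /* fraction of accesses hitting swap */

        for (int i = 0; i < 3; i++) {
            double f = spill[i];
            /* slowdown relative to all-RAM: (1 - f) * 1 + f * ratio */
            printf("spill %4.1f%% -> ~%.0fx slower\n", f * 100.0, (1.0 - f) + f * ratio);
        }
        return 0;
    }

This lands close to the rough 2x / 10x / 100x figures above.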