dspillett 13 hours ago

> There’s a common rule of thumb that says you should have swap space equal to some multiple of your RAM.

That rule came about when RAM was measured in a couple of MB rather than GB, and it hasn't made sense for a long time in most circumstances: if you are paging out a few GB of stuff on spinning drives, your system is likely stalling so hard from disk thrashing that you hit the power switch, and on SSDs you are not-so-slowly killing them with the excess writes.

That doesn't mean it isn't still a good idea to have a little allocated just in case. And as RAM prices soar while IO throughput improves and latency drops, we may see larger swap/RAM ratios become useful again: RAM sizes are constrained, but working sets aren't getting any smaller.

In a theoretical ideal computer, of which our actual designs are leaky-abstraction-laden implementations, things are the other way around: all the online storage is your active memory and RAM is just the first level of cache. That ideal hasn't historically been what we ended up with because the disparities in speed & latency between other online storage and RAM have been so large (several orders of magnitude), fast RAM has been volatile, and hardware & software designs are not stable & correct enough to avoid needing regular complete state resets.

> Why? At that point, I already have the same total memory as those with 8 GB of RAM and 8 GB of swap combined.

Because your need for fast immediate storage has increased, so 8-quick-8-slow is no longer sufficient. You are right that this doesn't mean 16-quick-16-slow is sensible, and 128-quick-128-slow would be ridiculous. But no swap at all doesn't make sense either: on a machine imbued with silly amounts of RAM, are you really going to miss a few GB of disk allocated just in case, when it could be the difference between slower operation for a short while and some thing(s) getting OOM-killed?

man8alexd 13 hours ago | parent [-]

Swap is not a replacement for RAM. It is not just slow, it is very, very, very slow: even SSDs are ~10^3x slower than RAM at random access of small 4K blocks. Swap is for allocated but unused memory. If the system tries to use swap as active memory, it becomes unresponsive very quickly: a 0.1% memory excess causes roughly a 2x degradation, 1% a 10x degradation, 10% a 100x degradation.
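To put rough numbers on that ratio (ballpark figures of my own, not measurements): a DRAM access is on the order of 100 ns, while a 4K random read from a decent NVMe SSD is on the order of 100 µs, so 100 µs / 100 ns ≈ 10^3x, before any queuing delay under load.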

AtlasBarfed 9 hours ago | parent [-]

What is allocated-but-unused memory? That sounds like memory that will be used in the near future, so we are just scheduling an annoying disk load for when it is needed.

You are of course highlighting the problem: virtual addressing was intended to abstract away memory resource usage, but it provides poor facilities for power users to finely prioritize memory usage.

An example of this is game consoles, which didn't have this layer: game writers had to reserve parts of RAM for specific uses.

You can't do this easily in Linux AFAIK, because it forces the model upon you.

man8alexd 9 hours ago | parent [-]

Unused or inactive memory is memory that hasn't been accessed recently. The kernel maintains LRU (least recently used) lists for most of its memory pages, and its memory management works on the assumption that the least recently used pages are the least likely to be accessed soon. Under memory pressure, when the kernel needs to free some memory pages, it swaps out pages at the tail of the inactive anonymous LRU.
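You can see the sizes of those anonymous LRU lists (and how much swap is in use) from userspace via /proc/meminfo. A minimal sketch, assuming Linux; the field names are what the kernel reports, everything else is just illustration:

    /* Print the anon LRU sizes and swap figures from /proc/meminfo,
     * i.e. roughly what the kernel considers "recently used" vs
     * "candidate for swap-out". */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("fopen /proc/meminfo"); return 1; }
        char line[256];
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "Active(anon):", 13) == 0 ||
                strncmp(line, "Inactive(anon):", 15) == 0 ||
                strncmp(line, "SwapTotal:", 10) == 0 ||
                strncmp(line, "SwapFree:", 9) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }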

Cgroup limits and OOM scores let you prioritize memory usage per process and per process group. The madvise(2) syscall lets you prioritize memory usage within a process.
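For example, a process can mark a large, rarely-touched buffer as a good swap-out candidate and pin a small latency-critical region so it is never swapped. A minimal sketch (Linux-specific; MADV_COLD needs kernel 5.4+, and the buffer sizes are arbitrary):

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #ifndef MADV_COLD
    #define MADV_COLD 20   /* kernel UAPI value, in case libc headers predate it */
    #endif

    int main(void) {
        size_t len = 64 * 1024 * 1024;   /* 64 MiB anonymous mapping */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        memset(buf, 0xAB, len);          /* touch it so pages actually exist */

        /* We don't expect to touch this again soon: move it toward the
         * tail of the inactive LRU so it gets reclaimed/swapped first. */
        if (madvise(buf, len, MADV_COLD) != 0)
            perror("madvise(MADV_COLD)"); /* EINVAL on kernels < 5.4 */

        /* Conversely, pin a small hot region so it is never swapped out
         * (subject to RLIMIT_MEMLOCK). */
        static char hot[4096];
        if (mlock(hot, sizeof hot) != 0)
            perror("mlock");

        munmap(buf, len);
        return 0;
    }

That's about as much per-region control as you get from within a process; the coarser knobs (memory.high/memory.max in cgroup v2, oom_score_adj) sit outside it.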