petcat 5 hours ago

The OP clearly states that he wants to know the earliest origin of the rule, and the only answers he gets are people giving their own opinions on how much swap space you should have.

Too bad because it's an interesting question that I would also like to know the answer to.

void-star 5 hours ago | parent [-]

Nope. Those are not the only answers I am seeing. I'm still curious though. 2x was nice because nobody really questioned it. Now that we have, there doesn't seem to be one "answer". This is a fun/interesting question that comes up every now and then here and elsewhere :-) I suspect someone smarter than me about system tuning will have a much smarter and more nuanced answer than "just use 2x"

kgwxd 5 hours ago | parent [-]

I thought the modern advice was that you don't need it at all. No more spinning disks, so there's no speed gain from using the inner-most ring, and modern OSes manage memory in more advanced and dynamic ways. That's what I choose to believe anyway, I don't need any more hard choices when setting up Linux :)

klempner 4 hours ago | parent | next [-]

The main downside to not having swap is that Linux may start discarding clean file backed pages under memory pressure, whereas if you had swap available it could go after anonymous pages that are actually cold.

On a related note, your program code is very likely (mostly) clean file backed pages.

Of course, in the modern era of SSDs this isn't as big of a problem, but in the late days of running serious systems with OS/programs on spinning rust I regularly saw full blown collapse this way: processes getting stuck for tens of seconds as every process on the system contended on a single disk, page-faulting as it executed code.
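The file-backed vs anonymous distinction above can be made concrete. A minimal Python sketch (an illustration, not from the thread): a file-backed mapping has a file behind it that clean pages can be re-read from, while an anonymous mapping can only leave RAM by being written to swap.

```python
# Illustration of the two kinds of mappings the parent comment describes.
# File-backed pages that are clean can simply be dropped and re-read from
# disk; anonymous pages have no backing file, so without swap they are
# pinned in RAM.
import mmap
import os
import tempfile

PAGE = mmap.PAGESIZE  # typically 4096 bytes on Linux

# File-backed mapping: backed by a real file on disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * PAGE)
    path = f.name

fd = os.open(path, os.O_RDWR)
file_backed = mmap.mmap(fd, PAGE)

# Anonymous mapping: fd of -1 means no file behind it.
anonymous = mmap.mmap(-1, PAGE)
anonymous[:5] = b"hello"  # now a dirty anonymous page

print(bytes(anonymous[:5]))  # b'hello'

file_backed.close()
anonymous.close()
os.close(fd)
os.unlink(path)
```

Program text (executable code) is itself a read-only file-backed mapping, which is why, as the parent notes, reclaiming it forces processes to page-fault their own code back in.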

anyfoo 4 hours ago | parent | prev | next [-]

I don't think that's correct. Having swap still allows you to page out rarely-used pages from RAM and lets that RAM be used for things that positively impact performance, like caching filesystem objects that are actually in use. Pages that are backed by disk (e.g. files) don't need swap, but anonymous memory that has, say, only been touched once and never read afterwards should have a place to go as well. Also, without swap space the kernel can only evict file backed pages, instead of including anonymous memory in that choice.

For that reason, I always set up swap space.

Nowadays, some systems also have compression in the virtual memory layer, i.e. rarely used pages get compressed in RAM to use up less space there, without necessarily being paged out (= written to swap). Note that I don't know much about modern virtual memory and how exactly compression interacts with paging out.
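The win from compressing rarely-used pages can be shown with a toy example (not how the kernel actually does it; zswap/zram use fast LZ-family compressors, while this uses zlib purely for illustration): a page of mostly-zero or repetitive data, which is common for freshly-allocated memory, shrinks to a small fraction of its size.

```python
# Toy illustration of why in-RAM page compression pays off: many cold
# pages are highly compressible, so a compressed copy costs far less RAM
# than the original page.
import zlib

PAGE_SIZE = 4096

# A page of zeros, like much freshly-allocated, rarely-touched memory.
page = b"\x00" * PAGE_SIZE

compressed = zlib.compress(page)
print(len(page), "->", len(compressed))  # shrinks to a few dozen bytes

# A random-looking (already-compressed or encrypted) page would gain
# nothing; real implementations handle such pages separately, e.g. by
# writing them out uncompressed.
```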

SAI_Peregrinus an hour ago | parent | next [-]

That only helps if you don't have much free RAM. If you've got more free RAM than you need for cache (including disk cache), swap only slows things down. With RAM prices these days, though, buying enough RAM just to avoid swap isn't worth it. IME on a desktop with 128GiB of RAM & Zswap I've never hit the backing store, but have gone over 64GiB a few times. I wouldn't want to have to pay to rebuild my desktop these days, 128GiB of ECC RAM was pricey enough in 2023!

fluoridation 3 hours ago | parent | prev [-]

Every time I've run out of physical memory on Linux I've had to just reboot the machine, being unable to issue any kind of command through input devices. I don't know what it is, but Linux just doesn't seem to be able to deal with that situation cleanly.

anyfoo 3 hours ago | parent [-]

The situation I mentioned isn't about running out of memory, but about using memory more efficiently.

Running out of memory is a hard problem, because in some ways we still assume that computers are Turing machines with an infinite tape. (And in some ways, theoretically, we have to.) But it's not clear at all which memory to free up (by killing processes).

If you are lucky, there's one giant process with tens of GB of resident memory that you can kill to put your system back into a usable state, but that's not the only case.

fluoridation 3 hours ago | parent [-]

Windows doesn't do that, though. If a process starts thrashing the performance goes to shit, but you can still operate the machine to kill it manually. Linux though? Utterly impossible. Usually even the desktop environment dies and I'm left with a blinking cursor.

What good is it to get marginally better performance under low memory pressure at the cost of having to reboot the machine under extremely high memory pressure?

anyfoo 3 hours ago | parent [-]

In my experience the situations where you run into thrashing are rather rare nowadays. I personally wouldn't give up a good optimization for the rare worst case. (There's probably some knobs to turn as well, but I haven't had the need to figure that out.)
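One of the knobs alluded to above is `vm.swappiness`, which biases the kernel between reclaiming file cache and swapping out anonymous pages. A config sketch, with an illustrative value rather than a recommendation:

```shell
# Lower values make the kernel prefer dropping file cache over swapping
# anonymous pages; higher values do the opposite. Illustrative value only.
sudo sysctl vm.swappiness=10

# Persist the setting across reboots:
echo "vm.swappiness=10" | sudo tee /etc/sysctl.d/99-swap.conf
```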

fluoridation 3 hours ago | parent [-]

Try doing cargo build on a large Rust codebase with a matching number of CPU cores and GBs of RAM.

anyfoo 3 hours ago | parent [-]

I believe that it's not very hard to intentionally get into that situation, but... if you notice it doesn't work, won't you just not? (It's not that this will work without swap after all, just OOM-kill without thrashing-pain.)

fluoridation 2 hours ago | parent [-]

I don't intentionally configure crash-prone VMs. I have multiple concerns to juggle and can't always predict with certainty the best memory configuration. My point is that Linux should be able to deal with this situation without shitting the bed. It sucks to have some unsaved work in one window while another has decided that now would be a good time to turn the computer unusable. Like I said before, trading instability for marginal performance gains is foolish.

anyfoo 2 hours ago | parent [-]

No argument there. I also always had the impression that Linux fails less gracefully than other systems.

vlovich123 3 hours ago | parent | prev [-]

It's still beneficial: unused anonymous pages can be evicted in favor of more disk cache.