c0l0 3 days ago

I realize this is mostly tangential to the article, but a word of warning for those who are about to mess with overcommit for the first time: in my experience, the extreme stance of "always do [thing] with overcommit" is just not defensible, because most software (yes, "server" software too) is not written under the assumption that being able to deal with allocation failures in a meaningful way is a necessity. At best, there's a "malloc() or die"-like stanza in the source, and that's that.

You can and maybe even should disable overcommit this way when running postgres on the server, with only a minimum of what you would these days call sidecar processes (monitoring and backup agents, etc.) on the same host/kernel. But once you have a typical zoo of stuff using dynamic languages living there, you WILL blow someone's leg off.
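
For the uninitiated, the knobs in question are the vm.overcommit_* sysctls. A minimal sketch of disabling overcommit; the ratio value here is just an example and very much workload-dependent:

  # refuse allocations beyond CommitLimit = swap + overcommit_ratio% of RAM
  sysctl -w vm.overcommit_memory=2
  sysctl -w vm.overcommit_ratio=80
  # persist across reboots
  printf 'vm.overcommit_memory = 2\nvm.overcommit_ratio = 80\n' > /etc/sysctl.d/90-overcommit.conf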

kg 3 days ago | parent | next [-]

I run my development VM with overcommit disabled and the way stuff fails when it runs out of memory is really confusing and mysterious sometimes. It's useful for flushing out issues that would otherwise cause system degradation w/overcommit enabled, so I keep it that way, but yeah... doing it in production with a bunch of different applications running is probably asking for trouble.

Tuna-Fish 3 days ago | parent | next [-]

The fundamental problem is that your machine is running software from a thousand different projects or libraries just to provide the basic system, and most of them do not handle allocation failure gracefully. If program A allocates too much memory and overcommit is off, that doesn't necessarily mean that A gets an allocation failure. It might also mean that code in library B in background process C gets the failure, and fails in a way that puts the system in a state that's not easily recoverable, and is possibly very different every time it happens.

For cleanly surfacing errors, overcommit=2 is a bad choice. For most servers, it's much better to leave overcommit on, but make the OOM killer always target your primary service/container, using oom-score-adj, and/or memory.oom.group to take out the whole cgroup. This way, you get to cleanly combine your OOM condition handling with the general failure case and can restart everything from a known foundation, instead of trying to soldier on while possibly lacking some piece of support infrastructure that is necessary but usually invisible.
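
For anyone who hasn't wired this up before, a rough sketch of both knobs. The paths assume cgroup v2, a made-up myservice.service unit, and $MAIN_PID standing in for its main process; the systemd-native way is OOMScoreAdjust= in the unit file:

  # make the primary service the OOM killer's preferred victim
  echo 1000 > /proc/$MAIN_PID/oom_score_adj
  # have the OOM killer take out the whole cgroup at once instead of a single task
  echo 1 > /sys/fs/cgroup/system.slice/myservice.service/memory.oom.group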

MrDrMcCoy 2 days ago | parent | next [-]

There are also cgroup resource controls to separately govern max memory and swap usage. Thanks to systemd and systemd-run, you can easily apply and adjust them on arbitrary processes. The manpages you want are systemd.resource-control and systemd.exec. I haven't found any other equivalent tools that expose these cgroup features to the extent that systemd does.
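
Roughly like this, assuming the unified (cgroup v2) hierarchy; the unit name and limit values are made up:

  # run an ad-hoc command under a transient scope with hard memory/swap limits
  systemd-run --scope -p MemoryMax=2G -p MemorySwapMax=0 -- ./heavy-build.sh
  # or adjust a unit that's already running
  systemctl set-property myservice.service MemoryMax=2G MemorySwapMax=0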

b112 2 days ago | parent [-]

I really dislike systemd and its monolithic mass of over-engineered, all-encompassing code. So I have to hang a comment here showing just how easy this is to manage in a simple startup script, and how these features are always exposed by the kernel anyway.

Taken from a SO post:

  # Create a cgroup
  mkdir /sys/fs/cgroup/memory/my_cgroup
  # Add the process to it
  echo $PID > /sys/fs/cgroup/memory/my_cgroup/cgroup.procs
  
  # Set the limit to 40MB
  echo $((40 * 1024 * 1024)) > /sys/fs/cgroup/memory/my_cgroup/memory.limit_in_bytes
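
Worth noting those are the old cgroup v1 paths; on a unified (cgroup v2) hierarchy the equivalent is roughly the following, assuming it's mounted at /sys/fs/cgroup and nothing else (e.g. systemd) is managing the tree:

  # enable the memory controller for children of the root cgroup
  echo +memory > /sys/fs/cgroup/cgroup.subtree_control
  mkdir /sys/fs/cgroup/my_cgroup
  echo $PID > /sys/fs/cgroup/my_cgroup/cgroup.procs
  # hard limit of 40MB (memory.high is the throttling/soft variant)
  echo $((40 * 1024 * 1024)) > /sys/fs/cgroup/my_cgroup/memory.max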

Linux is so beautiful. Unix is. Systemd is like a person with makeup plastered 1" thick all over their face: it detracts from and obscures the natural beauty, and is just a lot of work for no reason.

ece 2 days ago | parent | prev [-]

This is a better explanation and fix than others I've seen. There will be differences between desktop and server uses, but misbehaving applications and libraries exist on both.

vin10 3 days ago | parent | prev [-]

> the way stuff fails when it runs out of memory is really confusing

Have you checked what your `vm.overcommit_ratio` is? If it's < 100, you can see allocation failures (or OOM kills) even while plenty of RAM is free, since the default is 50, i.e. only swap plus 50% of RAM can be COMMITTED and no more.
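
You can check what the kernel thinks the ceiling is, and how close you are to it, with something like:

  sysctl vm.overcommit_memory vm.overcommit_ratio
  # CommitLimit vs. Committed_AS: the ceiling vs. what's already committed
  grep -E 'CommitLimit|Committed_AS' /proc/meminfo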

Curious what kind of failures you are alluding to.

kg 2 days ago | parent [-]

The main scenario that caused me a lot of grief is temporary RAM usage spikes, like a single process run during a build that uses ~8 GB of RAM or more for a few seconds and then exits. In some cases the OOM killer was reaping the wrong process, or the build was just failing cryptically, and if I examined stuff like top I wouldn't see any issue: plenty of free RAM. The tooling for examining this kind of historical memory usage is pretty bad; my only option was to look at the OOM killer logs and hope that eventually the culprit would show up.
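
The closest thing to history I've found is the kernel log plus the pressure counters, something along these lines:

  # past OOM-killer events, including the per-task memory table dumped at kill time
  journalctl -k -g 'out of memory|oom-killer'
  # memory pressure-stall totals since boot (PSI, reasonably recent kernels)
  cat /proc/pressure/memory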

Thanks for the tip about vm.overcommit_ratio, though; I think it's set to the default.

PunchyHamster 2 days ago | parent [-]

You can get statistics off cgroups to get an idea of what it was (assuming it's a service and not something a user ran), but that requires probing them often enough.
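
A rough idea of where to look, assuming cgroup v2 and a made-up myservice.service unit:

  cat /sys/fs/cgroup/system.slice/myservice.service/memory.current  # usage right now
  cat /sys/fs/cgroup/system.slice/myservice.service/memory.peak     # high-water mark (newer kernels)
  cat /sys/fs/cgroup/system.slice/myservice.service/memory.events   # has an oom_kill counter
  systemd-cgtop -m                                                  # or just watch units sorted by memory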

bawolff 2 days ago | parent | prev [-]

> At best, there's a "malloc() or die"-like stanza in the source, and that's that.

In fairness, I don't know what else general-purpose software is supposed to do here other than die. It's not like there is a graceful way to handle having insufficient memory to run the program.

jenadine 2 days ago | parent [-]

In theory, a process could just return an error for that specific operation, which would propagate to a "500 internal error" for that one request without impacting other operations. It could even take the hint and free some caches.

But in practice, I agree with you. It's just not worth it: so much work to handle it properly everywhere, and it's really difficult to test every malloc failure path.

So that's where an OOM killer might have a better strategy than just failing whichever program happens to allocate memory last.