creshal | 3 days ago
> But wouldn't it be better to avoid writing such programs?

Yes, indeed, the world would be a better place if we had just stopped writing Java 20 years ago.

> And how much memory can such daemons consume? A couple of hundred megabytes total?

Consider the average Java or .net enterprise programmer, who spends his entire career gluing together third-party dependencies without ever understanding what he's doing: your executable is a couple hundred megabytes already, then you recursively initialize all the AbstractFactorySingletonFactorySingletonFactories with all their dependencies monkey-patched with something worse for compliance reasons, and soon your program spends 90 seconds simply booting up and sits at two or three dozen gigabytes of memory consumption before it has served its first request.

> Is it really that much on modern systems?

If each of your Java/.net business app VMs needs 50 or so gigabytes to run smoothly, you can only squeeze ten of them into a 1U pizza box with a mere half terabyte of RAM; and while modern servers let you cram in multiple terabytes, do you really want to spend several tens of thousands of dollars on extra RAM when swap storage is basically free? Cloud providers do the same math: on AWS, for example, swap on EBS costs about as much per month as the same amount of RAM costs per hour. That's almost three orders of magnitude cheaper per gigabyte-month.

> When I program, my application may sometimes allocate a lot of memory due to some silly bug.

Yeah, that's on you. Many, many mechanisms let you limit per-process memory consumption (sketches at the end of this comment). But as TFA tries to explain, dealing with this situation is not the purpose of swap, and never has been. This is a pathological edge case.

> almost all used memory is now in swap and the whole system works snail-slow, presumably because the kernel doesn't think it should really unswap previously swapped memory and does this only on demand and only page by page.

This requires multiple conditions to be met:

- the broken program is allocating a lot of RAM, but not quickly enough to trigger the OOM killer before everything has been swapped out

- you have a lot of swap (do you follow the 1990s recommendation of having 1-2x the RAM amount as swap?)

- the broken program sits in the same cgroup as all the programs you want to keep working even in an OOM situation

Condition 1 can't really be controlled, since it's a bug anyway. Condition 2 doesn't have to be met unless you explicitly want it to. Why do you? Condition 3 is realistically only met on desktop environments: despite years of messing around with flatpaks and snaps and all that nonsense, they still don't make it easy for users to isolate programs they run that haven't been pre-containerized. But simply reducing swap to a more realistic size (try 4GB, see how far it gets you) will make this problem much less dramatic, as only part of the RAM has to get flushed back.

> In a hypothetical case without swap this case isn't so painful. When main system memory is almost fully consumed, the OOM killer kills the most memory-hungry program and all other programs just continue working as before.

And now you're wasting RAM that could be used for caching file I/O. Have you benchmarked how much time you're losing to that?

> I think that overall reliance on swap is nowadays just a legacy of old times when main memory was scarce and back then it maybe was useful to have swap.

No, you just still don't understand the purpose of swap. Also, "old times"? You mean today? Because we still have embedded environments, we have containers, we have VMs; almost all software not running on a desktop is running under strict memory constraints.

> and kernel code may be simpler (all this swapping code may be removed)

So you want to remove all code for file caching? Bold strategy.
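On the EBS-versus-RAM arithmetic above: the "almost three orders of magnitude" is really just the number of hours in a month, so no exact prices are needed. A back-of-the-envelope check:

  import math

  hours_per_month = 730  # 24 hours * ~30.4 days
  # If 1 GB of EBS for a month costs roughly what 1 GB of RAM costs for
  # an hour, then per GB-month the RAM is ~730x more expensive:
  print(hours_per_month)               # 730
  print(math.log10(hours_per_month))   # ~2.86, i.e. "almost three orders of magnitude"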
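A minimal sketch of one of the per-process limiting mechanisms mentioned above: a plain POSIX rlimit on address space, set from Python right before launching the offender. The daemon name and the 2 GiB figure are placeholders, and note that RLIMIT_AS caps virtual address space, not resident memory.

  import resource
  import subprocess

  LIMIT = 2 * 1024**3  # 2 GiB, arbitrary example value

  def cap_memory():
      # Runs in the child between fork and exec: any allocation that would
      # push the address space past LIMIT fails with ENOMEM in that process
      # only, instead of dragging the whole machine into swap.
      resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))

  # "./leaky-daemon" is a made-up stand-in for the buggy program.
  subprocess.run(["./leaky-daemon"], preexec_fn=cap_memory)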
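And for condition 3 and the "less swap" point, a rough sketch of doing the isolation by hand with the cgroup v2 memory controller, which can cap RAM and swap separately per group. It assumes cgroup v2 mounted at /sys/fs/cgroup, root privileges, and made-up names and numbers; on a systemd box you would normally get the same effect with systemd-run and its MemoryMax=/MemorySwapMax= properties instead of poking the files directly.

  import os
  from pathlib import Path

  cg = Path("/sys/fs/cgroup/leaky-app")   # hypothetical cgroup for the workload
  cg.mkdir(exist_ok=True)

  (cg / "memory.max").write_text("4G")         # hard RAM cap for this group
  (cg / "memory.swap.max").write_text("512M")  # most swap this group may use

  pid = os.fork()
  if pid == 0:
      # Child: move itself into the cgroup, then exec the workload, so a leak
      # only evicts this group's pages instead of the whole system's.
      (cg / "cgroup.procs").write_text(str(os.getpid()))
      os.execvp("./leaky-daemon", ["./leaky-daemon"])
  os.waitpid(pid, 0)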