jandrewrogers | 5 hours ago
Static allocation has been around for a long time, but few people consider it even in contexts where it makes a lot of sense. I’ve designed a few database engines that used pure static allocation, and developers often chafe at this model because it seems easier to delegate allocation (which really just obscures the complexity). Allocation aside, many optimizations require knowing precisely how close to its instantaneous resource limits the software actually is, so it is good practice for performance engineering generally. Hardly anyone does it (look at most open source implementations), so promoting it can’t hurt.
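For concreteness, a minimal sketch of what that model can look like in C (the pool size, names, and bump-pointer scheme are made up for illustration, not taken from any particular engine): everything the component will ever use is reserved once up front, so the distance to the hard limit is always just `POOL_BYTES - pool_used`.

```c
/* Sketch of a statically allocated pool: the budget is fixed at design time
 * and never grows, so resource headroom is always known exactly. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define POOL_BYTES (64u * 1024u * 1024u)   /* hard budget chosen up front */

static uint8_t pool[POOL_BYTES];           /* reserved at program load, never grows */
static size_t  pool_used;                  /* bump pointer; doubles as a usage gauge */

/* Carve a block out of the fixed pool; returns NULL when the budget is
 * exhausted instead of asking the OS for more memory. */
static void *pool_alloc(size_t n)
{
    n = (n + 15u) & ~(size_t)15u;          /* keep 16-byte alignment */
    if (POOL_BYTES - pool_used < n)
        return NULL;                       /* caller must handle the hard limit */
    void *p = &pool[pool_used];
    pool_used += n;
    return p;
}

int main(void)
{
    void *buf = pool_alloc(4096);
    printf("allocated %p, %zu of %u bytes in use\n", buf, pool_used, POOL_BYTES);
    return 0;
}
```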
wahern | 27 minutes ago
I've always thought static allocation was why we got overcommit[1] in Linux and its infamous OOM killer. In the 1990s big boy commercial databases assumed specialized admins, and one of their tasks was to figure out the value for the memory allocation setting in the DB configuration, which the DB would immediately allocate on startup. Since it was a magic value, the easiest path forward was just to specify most of your RAM. DBs used to run on dedicated machines, anyhow.

But then Linux came along and democratized running servers, and people wanted to run big boy databases alongside other services like Apache. Without overcommit these databases wouldn't run as typically configured: "best practice" allocation advice used up too much memory, leaving nothing for the rest of the services, especially on the more memory-constrained machines people ran Linux on. On a typical system most of the memory preallocated to the DB was never used anyhow (the figure was rarely chosen as carefully as intended), or the DB was designed (or at least its manuals were written) with bigger machines in mind, and Linus wanted things to Just Work for experienced admins and newcomers alike. So the easy fix was just to overcommit in the kernel, et voila, a pain point for people dabbling with Linux was solved, at least superficially.

NB: I was just a newbie back then, so any older grey beards, please feel free to correct me. But I distinctly remember supporting commercial databases being one of the justifications for overcommit, despite overcommit not being typical in the environments those DBs originally ran in, AFAIU.

[1] Note that AFAIU the BSDs had overcommit, too, but just for fork + CoW. These days FreeBSD at least has overcommit more similar to Linux. Solaris actually does strict accounting even for fork, and I assume that was true back in the 90s. Did any commercial Unices actually do overcommit by default?
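For anyone who hasn't watched this happen, a rough sketch of the mechanism being described (the 64 GiB figure is arbitrary, and the outcome depends on the machine and on the vm.overcommit_memory sysctl):

```c
/* Under Linux's default heuristic overcommit (vm.overcommit_memory = 0), a
 * huge anonymous mapping can succeed even if it exceeds available RAM,
 * because physical pages are only committed when each page is first touched.
 * Under strict accounting (vm.overcommit_memory = 2) the same mmap would
 * typically fail up front instead. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t reserve = (size_t)64 * 1024 * 1024 * 1024;  /* far more than most machines have */

    void *p = mmap(NULL, reserve, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");                                /* strict accounting lands here */
        return 1;
    }
    printf("reserved %zu bytes without backing them\n", reserve);

    /* Touching pages is what actually consumes memory; do enough of this
     * on an overcommitted system and the OOM killer gets involved. */
    memset(p, 0, 1u << 20);                            /* commit only the first 1 MiB */

    munmap(p, reserve);
    return 0;
}
```

Which is roughly the situation described above: the DB's preallocation "succeeds" regardless of what else is running, and the reckoning only comes later if the memory is ever actually touched.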