dijit 7 hours ago

I've watched this pattern play out in systems administration over two decades. The pitch is always the same: higher abstractions will democratise specialist work. SREs are "fundamentally different" from sysadmins, Kubernetes "abstracts away complexity."

In practice, I see expensive reinvention. Developers debug database corruption after pod restarts without understanding filesystem semantics. They recreate monitoring strategies and networking patterns on top of CNI because they never learned the fundamentals these abstractions are built on. They're not learning faster: they're relearning the same operational lessons at orders of magnitude higher cost, now mediated through layers of YAML.

Each wave of "democratisation" doesn't eliminate specialists. It creates new specialists who must learn both the abstraction and what it's abstracting. We've made expertise more expensive to acquire, not unnecessary.

Excel is the exception that proves the rule. It's objectively terrible: roughly 30% of genomics papers contain gene-name errors from Excel auto-converting names into dates, JP Morgan's $6bn London Whale loss was compounded by spreadsheet formula errors, and Public Health England dropped nearly 16,000 COVID cases by hitting Excel's row limit. Yet it succeeded at democratisation by accepting catastrophic failures no proper system would tolerate.

The pattern repeats because we want Excel's accessibility with engineering reliability. You can't have both. Either accept disasters for democratisation, or accept that expertise remains required.

krackers 5 hours ago | parent | next [-]

All abstractions are leaky. C, for example, is a leaky abstraction because the code you write isn't what actually gets emitted: feed the same code to two different compilers and one might vectorize your loop while the other doesn't.
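
To make the leak concrete, here's a minimal sketch (an illustrative file, not from any particular project): the same floating-point reduction loop can come out scalar or vectorized depending on compiler, optimisation level, and floating-point flags. gcc's -fopt-info-vec and clang's -Rpass=loop-vectorize print vectorization reports you can compare.

    /* sum.c -- identical source, potentially different machine code.
     * Compare, for example:
     *   gcc   -O3 -fopt-info-vec        -S sum.c
     *   clang -O2 -Rpass=loop-vectorize -S sum.c
     * One build may report a vectorized loop while the other stays scalar,
     * and strict IEEE semantics alone can keep the reduction scalar unless
     * you opt into -ffast-math.
     */
    #include <stddef.h>

    float sum(const float *a, size_t n) {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++) {
            s += a[i];  /* FP reduction: needs reassociation to vectorize */
        }
        return s;
    }

Whether it vectorizes is exactly the kind of detail the abstraction promises you don't have to care about, right up until performance (or a subtle numerical difference) makes you care.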

walterbell 6 hours ago | parent | prev | next [-]

> accept disasters for democratisation

Will insurance policy coverage and premiums change when using non-deterministic software?

aleph_minus_one 5 hours ago | parent [-]

More likely: hardly any insurance company will be willing to cover this, given the unpredictability and the high costs when a disaster does happen.

chis 5 hours ago | parent | prev [-]

If Kubernetes didn't reduce labor in any way, then the 95% of large corporations that adopted it must all be idiots, right? I find that kinda hard to believe. It seems more likely that Kubernetes has been adopted alongside increased scale, such that sysadmin jobs have just moved up to new levels of complexity.

It seems like in the early 2000s every tiny company needed a sysadmin to manage the physical hardware, the database, and the custom deployment scripts. That particular job is just gone now.

vbezhenar 3 hours ago | parent | next [-]

Kubernetes enabled capabilities small companies couldn't have dreamed of before.

I can implement zero-downtime upgrades easily with Kubernetes. No more end-of-day upgrades and late-night debugging sessions because something went wrong; I can push a commit at any time of day and be confident the upgrade will work.

My infrastructure is self-healing. No more crashed app servers.

Some engineering tasks are standardized and outsourced to a professional hosting provider via managed services. I don't need to manage operating system updates or certain component updates (including Kubernetes itself).

My infrastructure can be easily scaled horizontally. Both up and down.

I can commit changes to git to apply them, or easily revert them, and I have the full history.

Before, I would have needed to reinvent half of Kubernetes to get all of that. I guess big companies just did that; I never had the resources. So my deployments were not good: they didn't scale, they crashed, they needed frequent manual intervention, and downtime was common. Kubernetes and other modern approaches let small companies enjoy things they couldn't do before, at the expense of a slightly steeper devops learning curve.

dijit 5 hours ago | parent | prev [-]

You’re absolutely right that sysadmin jobs moved up to new levels of complexity rather than disappeared. That’s exactly my point.

Kubernetes didn’t democratise operations, it created a new tier of specialists. But what I find interesting is that a lot of that adoption wasn’t driven by necessity. Studies show 60% of hiring managers admit technology trends influence their job postings, whilst 82% of developers believe using trending tech makes them more attractive to employers. This creates a vicious cycle: companies adopt Kubernetes partly because they’re afraid they won’t be able to hire without it, developers learn Kubernetes to stay employable, which reinforces the hiring pressure.

I’ve watched small companies with a few hundred users spin up full K8s clusters when they could run on a handful of VMs. Not because they needed the scale, but because “serious startups use Kubernetes.” Then they spend six months debugging networking instead of shipping features. The abstraction didn’t eliminate expertise, it forced them to learn both Kubernetes and the underlying systems when things inevitably break.

The early 2000s sysadmin managing physical hardware is gone. They’ve been replaced by SREs who need to understand networking, storage, scheduling, plus the Kubernetes control plane, YAML semantics, and operator patterns. We didn’t reduce the expertise required, we added layers on top of it. Which is fine for companies operating at genuine scale, but most of that 95% aren’t Netflix.

stoneforger 5 hours ago | parent [-]

All this is driven by numbers. The bigger you are, the more money they give you to burn. No one is really working on solving problems; it's 99% managing complexity driven by shifting goalposts. No one wants to build to solve a problem. It's a giant financial circle jerk: everybody wants to sell, rinse, and repeat, the line must go up. No one says stop, because at 400mph hitting the brakes will get you killed.