titzer 4 hours ago

The irony is that the vast deskilling that's happening because of this means that most "software engineers" will become incapable of understanding, let alone fixing or even building new versions of the systems that they are utterly dependent on.

There should be thousands or tens of thousands of people worldwide who can build the operating systems, virtual machines, libraries, containers, and applications that AI is built on. But the number will dwindle, and we'll ironically become unable to build what our ancestors did, utterly dependent on AI artifacts to do it for us.

God I hope it doesn't all crash at once.

tuvang 4 hours ago | parent | next [-]

There is a deadly game of chicken going on. Junior recruiting has already stopped for the most part. The only way this doesn't end in catastrophe is if AI becomes genuinely as good as the most skilled developers before we run out of them. I doubt that very much, but I don't find it completely impossible.

theshrike79 4 hours ago | parent | next [-]

And the irony is that AI usage should make onboarding juniors easier.

Before, it was "hey $senior_programmer, where's the $thing defined in this project?", which either required a dedicated onboarding person or interrupted someone's flow - an expected cost of bringing up juniors.

Now a properly configured AI Agent can answer that question in 60 seconds, unblocking the Junior to work on something.
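The "where's $thing defined" lookup is itself mostly mechanical; even without an agent, a few lines of code can answer it. A rough Python sketch (the function name and the declaration keywords it greps for are my own illustrative choices; an agent layers natural language and project context on top of exactly this kind of search):

```python
import pathlib
import re


def where_is_defined(root, name):
    """Crude 'where is $thing defined?' helper: scan a project tree
    for def/class/func/function declarations mentioning `name`.

    Returns a list of (path, line_number, line_text) hits.
    """
    decl = re.compile(rf"\b(?:def|class|func|function)\s+{re.escape(name)}\b")
    hits = []
    for path in sorted(pathlib.Path(root).rglob("*")):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for lineno, line in enumerate(text.splitlines(), start=1):
            if decl.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

A real agent does far more (ranking, cross-references, explanation), but the unblocking effect for a junior is largely this.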

And no, it doesn't mean juniors or anyone else get to submit 10k-line PRs of code they haven't read and don't understand. That's a very different issue, one that can be solved by slapping people over the head.

bragr 3 hours ago | parent [-]

The problem is that juniors given access to AI don't seem to learn as much. AI just hands them fish over and over instead of teaching them how to fish.

andrekandre 2 hours ago | parent | next [-]

  > The problem is that juniors given access to AI don't seem to learn as much.
i see this first-hand; they don't even know what they don't know, so they circle over and over with AI leading them down rabbit holes and into code that breaks in weird ways they can't even guess how to fix... stuff that an experienced programmer would have written in a few minutes, let alone hours or days...
theshrike79 2 hours ago | parent | prev [-]

Yea, giving people a blank Claude with no setup will get you that.

What you could do is encourage (or force, with IT's assistance) them to use a prompt (or hook, or whatever) that refuses to do the work for them and instead tells them where to make the change and what to change, without actually writing the code.

flir 4 hours ago | parent | prev | next [-]

Or if code quality stops mattering, in a kind of "OK, the old codebase is irretrievably spaghettified. Let's just have the chatbot extract all the requirements from it and build a clean-room version" kind of way. It's also not impossible we go that route.

turlockmike 4 hours ago | parent | prev | next [-]

How many kernel devs does the world need? A dozen or two?

It will be the same with software. AI will be writing and consuming most software. We will be utilizing experiences built on top of that, probably generated in real time for hyper personalization. Every app on your phone will be replaced by one app. (Except maybe games, at least for a short while longer).

Everyone's treating writing code as this reverent thing. No one wrote code 100 years ago. Very few today write assembly. It will become a lost art because the economic necessity is gone.

It's the end of an era, but also the beginning of a new one. Building agentic systems is really hard, a hard enough problem that we need a ton of people building those systems. AI hardware devices have barely registered yet; we need engineers who can build and integrate all sorts of systems.

Engineering as a discipline will be the last job to be automated, since who do you think is going to build all the world's automation?

rafterydj an hour ago | parent | next [-]

How wildly dismissive of the foundation of the $X-billion software industry. You think humans just stumbled into writing code by accident or something?

How does building agentic systems, a "really hard" problem, not just end up a "regular code" problem? Because that is what it is. A distributed systems problem with non-deterministic run lengths. How do you switch agent contexts? Similar to how you solve regular program context switching. How do you search tool capabilities and verify them? How do you effectively manage scheduled tasks?

Oh, look, you've just invented the operating system kernel. Suddenly, those 'dozen or two' experts don't seem so archaic after all!

vdqtp3 2 hours ago | parent | prev | next [-]

> How many kernel devs does the world need? A dozen or two?

You're low by several orders of magnitude. "The 2025 development cycle saw 2,134 developers contribute to [Linux] kernel 6.18" [1]

[1] https://commandlinux.com/statistics/linux-kernel-contributor...

oblio 28 minutes ago | parent | prev [-]

Does it even make sense to build everything on top of machines that are 70% reliable? The sheer orchestration and validation overhead at scale risks being more expensive than just keeping most software engineers and having them manage a few AI agents.

Also, 200 years ago we didn't have bike mechanics. Car mechanics. Boat mechanics. Plumbers. Electricians. Not all new professions fade away.

qsera 3 hours ago | parent | prev | next [-]

Trust me, all those people do it for the love of doing it, so I don't think they will outsource their jobs to some automation....

I have been coding since long before the internet, and before there was huge demand for software devs... and I would keep coding even after there is no demand for it.

nicksergeant 4 hours ago | parent | prev | next [-]

I feel I've upskilled in so many directions (not just "ability to prompt LLMs") since going all in on LLM coding. So many tools, techniques, systems, and new areas of research I'd never have had the time to fully learn in the past.

I have a hard time believing any tenured developer is not actually learning things when using LLMs to build. They make interesting, repeatable choices (new CLIs I didn't even know existed, writing scripts to churn through tricky data, using specific languages for specific tasks, like Go for working through numerous large tasks concurrently, etc.).

Anyone not learning things via LLM coding right now either doesn't care at all about the underlying code/systems, or they had no foundational knowledge or interest in programming to begin with (which is also a valid way to use these tools, but they don't work very well without guidance for too long [yet]).

titzer 3 hours ago | parent | next [-]

Learning calculus by watching the professor solve integrals on the board for an hour doesn't give you the same depth of understanding as working through problem sets every week for a semester. If you ran off to your TA to solve every problem in your homework, you just wouldn't learn calculus.

I've vibe coded plenty. I mostly don't look at the crap coming out. Don't want to. When I do, I absorb a tiny bit, but not enough to recreate the thing from scratch. I might have a modicum more surface-level knowledge, but I don't have deep understanding and I don't have skills. To the extent that I've fixed or tweaked AI-generated code, it's not been to restructure, rearchitect, or refactor. If this were all I did day in and day out, my entire skillset would atrophy.

nicksergeant 3 hours ago | parent [-]

"I mostly don't look at the crap coming out."

This is pretty much my point. I use LLMs to code _and_ to learn. I read everything that comes out. Half of it is wrong or incomplete. The other half saved me a bunch of time and taught me things.

Waterluvian 4 hours ago | parent | prev | next [-]

I think there's a considerable difference in its ability to help with breadth vs. depth of expertise.

tripledry 4 hours ago | parent | prev | next [-]

For me both are true at the same time.

I vividly remember understanding how calculus works after watching some 3blue1brown videos on YouTube, but once I looked at some exercises I quickly realized I was not able to solve them.

Something similar happens with LLMs and programming. Sure, I understand the code, but I'm not intimately familiar with it the way I would be if I had programmed it "old school".

So yes, I do learn more, but I can't shake the feeling that there is some Dunning-Kruger effect going on. In essence, I think "banging my head against the wall" while learning is a key part of the learning process. Or maybe it's just me :D

mwigdahl 3 hours ago | parent [-]

It's not just you. I feel the same thing, and I saw it in practice helping my son study for a chemistry test just last night. He had worked through a bunch of problems by following the steps in his notes and got the right answers, but couldn't solve them without the notes because his comprehension of why he was taking all the steps wasn't solid.

Once we addressed that, he did great solo. Working the mechanics of the problems with the notes helped, but it was getting independent understanding of the reason for each step that put everything together for him.

zozbot234 4 hours ago | parent | prev | next [-]

What do you mean by "LLM coding"? That's not a very meaningful term, it covers everything from 100% vibe coded projects, to using the LLM to gradually flesh out a careful initial design and then verifying that the implementation is done correctly at every step with meticulous human review and checking.

nicksergeant 3 hours ago | parent [-]

The latter.

agentultra 3 hours ago | parent | prev | next [-]

> Anyone not learning things via LLM coding right now either doesn't care at all about the underlying code/systems

How many bytes is a pointer in C? How many bytes is a shared pointer in C++? What does sysctl do? What about fsync?

What is a mutex lock? How is it different from a spin lock?

You want to find the n nearest points to a given point on a 2-D Cartesian plane. Could you write the code to solve that on your own?

Can you answer any of these questions without searching for the answer?
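For the record, the nearest-points question has a compact answer. A minimal sketch in Python (the function name and point representation are my own, purely illustrative): rank candidates by squared distance, which preserves the ordering without needing a square root.

```python
import heapq


def n_nearest(points, origin, n):
    """Return the n points closest to origin on a 2-D Cartesian plane.

    Squared distance is sufficient for ranking, so sqrt is skipped.
    heapq.nsmallest runs in O(len(points) * log n).
    """
    ox, oy = origin
    return heapq.nsmallest(
        n, points, key=lambda p: (p[0] - ox) ** 2 + (p[1] - oy) ** 2
    )


# Example:
# n_nearest([(3, 4), (1, 1), (0, 2)], (0, 0), 2) -> [(1, 1), (0, 2)]
```

For repeated queries against a fixed point set, a k-d tree beats re-scanning, but the linear scan is the honest from-scratch answer.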

I don't use LLMs and I learn things fine. Always have. For several decades. I care deeply about the underlying code and systems. It annoys me when people say they do and they cannot even understand how the computer works. I'm fine with people having domain-specific knowledge of programming: maybe you've only been interested in web development and scripting DOM elements. But don't pretend that your expertise in that area means you understand how to write an operating system.

Or worse: that it prevents you from learning how to write an operating system.

You can do that without an LLM. There's no royal road. You have to understand the theory, read the books, read the code, write the code, make mistakes, fix mistakes, read papers, talk to other people with more experience than you... and just write code. And rewrite it. And do it all again.

I find the opposite is true: those who use LLM coding exclusively never enjoyed programming to begin with, only learned as much as they needed to, and want the end results.

nicksergeant 3 hours ago | parent [-]

Agree with pretty much everything you wrote here, I guess with the addendum that LLMs can be a part of the learning experience you're describing. It's as easy as telling the LLM: "don't write a single line of code or run a single command; I want to do everything. Your goal is to help me understand what we're doing here."

There are always going to be people who just want the end result. The only difference now is that LLM tools allow them to get much closer to the end result than they previously were able to. And on the other side, there are always going to be people who want to _understand_ what's happening, and LLMs can help accelerate that. I use LLMs as a personalized guide to learning new things.

tpdly 44 minutes ago | parent | next [-]

I know it sounds extreme to dismiss that workflow, but I don't think people are talking enough about the subtle psychological consequences of LLM writing for this kind of thing.

In the same way that googling for an SEO article's superficial answer ends up meaning you never really bother to memorize it, "ask chat" seems to lead to never really bothering to think hard about it.

Of course I google things, but maybe I should be trying to learn in a way that minimizes the need. Maybe it's important to learn how to learn in a way that minimizes exposure to sycophantic average-blog-speak.

agentultra an hour ago | parent | prev [-]

Best of luck in your journey!

To those reading this thread, though: be wary of the answers LLMs generate. They're plausible-sounding, and LLMs are designed to be sycophants. Double-check their answers against credible sources.

And read the source!

anovikov 4 hours ago | parent | prev [-]

This. I never had the patience to figure out how to build an iOS app from scratch because it required too much boilerplate work. Now I do, and I got to enjoy Swift as a language and learned a lot of iOS (and Mac) APIs.

JustResign 3 hours ago | parent [-]

But it isn't "from scratch", is it? It's "from Claude".

nicksergeant 3 hours ago | parent [-]

If you build a house from scratch but you didn't mill the lumber, did you build it from scratch?

If you make a pizza from scratch but you used canned sauce was it from scratch? What if you used store bought dough? What if you made the sauce and the dough but you didn't grow the tomato?

hnthrow0287345 4 hours ago | parent | prev | next [-]

>But the number will dwindle and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.

That's only a brief moment in time. We learned it once, we can learn it again if we have to. People will tinker with those things as hobbies and they'll broadcast that out too. Worst case we hobble along until we get better at it. And if we have to hobble along and it's important, someone's going to be paying well for learning all of that stuff from zero, so the motivation will be there.

Why do people worry about a potential, temporary loss of skill?

doctorwho42 4 hours ago | parent | next [-]

Because they may have studied history... There are countless examples of eras of lost technology due to a stumble in society, where those societies were never able to recover the lost "secrets" of the past. Ultimately, yes, humans can rediscover and reinvent things we know are possible. But it is a very real and understandable concern that we could build a society that slowly crumbles, unable to relearn how to maintain the systems it relies upon fast enough to stop the degradation.

Like, yeah, you have the resources right now to bootstrap your knowledge of most coding languages. But that is predicated on so many previous skills learned throughout your life, in adulthood and childhood, many of which we take for granted. And ultimately AI/LLMs aren't just affecting developers; they are infecting all strata of education. So it is quite possible that we build a society that is entirely dependent on these LLMs to function, because we have offloaded the knowledge from society's collective mind... And getting it back is not as simple as sitting down with a book.

hnthrow0287345 4 hours ago | parent [-]

And we're still here right? We have more books and knowledge and capabilities than ever. Despite theoretically losing knowledge along the way, we're okay (mostly).

Society can replace the systems it relies on. The replacement might not be the best, but it'll probably handle things until we can reinvent a newer, better system. It probably won't be easy, but you can't convince me that humanity suddenly cannot adapt and fix problems right in front of them. How long does history have us doing that?

These are extraordinary claims that all of society will just become dumb and not be able to do any of this. History is also littered with people fretting about the next generation not being smart enough or whatever, and those fears rhyme pretty closely with what we're talking about here.

Tomis02 2 hours ago | parent [-]

You could have lived 200 years. But people decided they'd rather invest in crypto or LLMs instead.

Maybe humans will still be here in a century. But you won't be. It didn't have to be this way.

bit-anarchist 2 hours ago | parent [-]

I don't see how they are mutually exclusive in the long term. Crypto investment isn't that big, and LLMs, or AI in general, may support better medical treatments, possibly allowing people to reliably live to 200.

Waterluvian 4 hours ago | parent | prev | next [-]

I imagine it being a "does anybody know COBOL?!" moment, but much sooner than sixty years from now.

RhysU 4 hours ago | parent [-]

COBOL also came to mind.

The COBOL thing seems to be working out just fine last I heard. Today a small number of people get paid well to know COBOL's depths and legacy platforms/software. The world moved on, where possible, to lower cost labor and tools.

Arguably, that outcome was the right kind of creative destruction. Market economics doesn't incentivize any other outcome long-term. We'll see the arc of COBOL play out again with LLM coding.

jerf 3 hours ago | parent | next [-]

I've been waiting for the article talking about how AI is affecting COBOL. Preferably with quotes from actual COBOL programmers since I can already theorize as well as the next guy but I'm interested in the reports from the field.

While LLMs have become pretty good at generating code, I think some of their other capabilities are still undersold and poorly understood, and one of them is that they are very good at porting. AI may offer the way out for porting COBOL finally.

You definitely can't just blindly point it at one codebase and tell it to convert to another. The LLMs do "blur" the code, I find, quietly deciding that maybe this little clause wasn't important and dropping it. (Though when I've encountered this, I sometimes understand where it's coming from: when the old code is twisty and full of indirection, I often have a hard time as a human being sure what is and isn't used just by reading it, too.)

But the process is still way, way faster than the old days of typing in the new code one line at a time while staring at the old code. It's definitely way cheaper to port a codebase to a new language in 2026 than it was in 2020. Back then it was so expensive it was almost always not even an option. I think a lot of people have not caught up with those cost reductions and are not correctly factoring them into their plans.

It is easier than ever to get out of a language that has some fundamental issue that is hard to overcome (performance, general lack of capability like COBOL) and into something more modern that doesn't have that flaw.

jlokier 2 hours ago | parent | prev [-]

I know it's just anecdotal, but I looked up COBOL salaries a couple of years ago, curious about this "paid well".

The salaries were OK, but not notably good for COBOL.

Here's an anecdotal Reddit thread about it. https://www.reddit.com/r/developpeurs/comments/1ixfpsx/le_sa...

FpUser 4 hours ago | parent | prev [-]

>"That's only a brief moment in time. We learned it once, we can learn it again if we have to. "

Yes we can but there is a big problem here. We will "learn it again" after something breaks. And the way the world currently functions there might not be a time to react. It is like growing food on industrial scale. We have slowly learned it over the time. If it breaks now with the knowledge gone and we have to learn it again it will end the civilization as we know it.

hnthrow0287345 4 hours ago | parent [-]

>It is like growing food on industrial scale.

How many people do you think know how to do that today? It's in the millions (probably tens to hundreds of millions), scattered all across the globe, because we all need to eat. Not to mention all the publications on the topic in many different languages. The only credible case for everyone forgetting how to farm is nuclear doomsday, and at that point we'll all be dead anyway.

>If it breaks now with the knowledge gone and we have to learn it again it will end the civilization as we know it.

I don't think there is a single piece of technology that is so critical to civilization that everyone alive easily forgets how to do it and there is also zero documentation on how it works.

These vague doomsday scenarios around losing knowledge and crashing civilization just have zero plausibility to me.

kingkawn 4 hours ago | parent | prev | next [-]

If a catastrophic failure occurs, we will have to return to first principles and re-derive the solutions. Not so bad; probably enlivening, even, to get to spin up the mind again after a break.

cdetrio 3 hours ago | parent [-]

We found 500 zero-days in ten-year-old, widely used open-source projects. Was that not a demonstration of the catastrophic failure of human debugging capability?

kingkawn 2 hours ago | parent [-]

And yet the world keeps turning; we'll figure it out.

anon291 3 hours ago | parent | prev [-]

I mean, there should be. But there's not. Despite the millions of CS grads produced, most people could not reasonably be expected to build many "standard" parts of a software stack.