sd9 6 hours ago

The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading. I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

To be honest, I’m looking at leaving software because the job has turned into a different sort of thing than what I signed up for.

So I think this article is partly right, Bob is not learning those skills which we used to require. But I think the market is going to stop valuing those skills, so it’s not really a _problem_, except for Bob’s own intellectual loss.

I don’t like it, but I’m trying to face up to it.

djaro 6 hours ago | parent | next [-]

> So if Bob can do things with agents, he can do things.

The problem arises when Bob encounters a problem too complex or unique for agents to solve.

To me, it seems a bit like the difference between learning how to cook versus buying microwave dinners. Sure, a good microwave dinner can taste really good, and it will be a lot better than what a beginning cook will make. But imagine aspiring cooks just buying premade meals because "those aren't going anywhere". Over the span of years, eventually a real cook will be able to make way better meals than anything you can buy at a grocery store.

The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.

jacquesm 6 hours ago | parent | next [-]

Precisely. The first 10 rungs of the ladder will be removed, but we still expect you to be able to get to the roof. The AI won't get you there and you won't have the knowledge you'd normally gain on those first 10 rungs to help you move past #10.

NiloCK 5 hours ago | parent | next [-]

People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).

This isn't guaranteed to play out, but it should be the default expectation until we actually see greatly diminishing outputs at the frontier of science, engineering, etc.

lukev 4 hours ago | parent | next [-]

I think that's too easy an analogy, though.

Calculators are deterministically correct given the right input. They don't require expert judgement about whether an answer they gave is reasonable or not.

As someone who uses LLMs all day for coding, and who regularly bumps against the boundaries of what they're capable of, that's very much not the case. The only reason I can use them effectively is because I know what good software looks like and when to drop down to more explicit instructions.

II2II 4 hours ago | parent | next [-]

> Calculators are deterministically correct

Calculators are deterministic, but they are not necessarily correct. Consider 32-bit integer arithmetic:

  30000000 * 1000 / 1000
  30000000 / 1000 * 1000
Mathematically, they are identical, and each computation is deterministic. Yet the computer will produce different results for them, because the first multiplication overflows a 32-bit int. There are many other cases where the expected result is different from what a computer calculates.
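For the curious, the wraparound can be sketched in a few lines of Python standing in for a 32-bit machine (`to_i32` and `div_trunc` are helper names of my own, not anything from the thread):

```python
def to_i32(x):
    # Wrap into the signed 32-bit range, the way C int arithmetic does.
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

def div_trunc(x, d):
    # C-style integer division truncates toward zero.
    return -((-x) // d) if x < 0 else x // d

# Multiply first: 30000000 * 1000 overflows a 32-bit int.
a = to_i32(div_trunc(to_i32(30000000 * 1000), 1000))
# Divide first: every intermediate value fits.
b = to_i32(to_i32(30000000 // 1000) * 1000)

print(a, b)  # the two mathematically identical expressions disagree
```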
wongarsu 4 hours ago | parent | next [-]

A good calculator will however do this correctly (as in: the way anyone would expect). Small cheap calculators resort to confusing syntax, but if you pay $30 for a decent handheld calculator, or use something decent like WolframAlpha on your phone/laptop/desktop, you won't run into precision issues for reasonable numbers.

Ifkaluva 3 hours ago | parent [-]

He’s not talking about order of operations; he’s talking about floating-point error, which will accumulate in different ways in each case, because floating point is an imperfect representation of real numbers.
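A one-line illustration (mine, not from the thread) of why the order of accumulation matters for floats:

```python
# Floating-point addition is not associative: the same three terms,
# grouped differently, round differently.
x = (0.1 + 0.2) + 0.3
y = 0.1 + (0.2 + 0.3)
print(x == y)   # False
print(x - y)    # a tiny but nonzero difference
```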

wongarsu 7 minutes ago | parent | next [-]

I didn't consider it an order of operations issue. Order of operations doesn't matter in the above example unless you have bad precision. What I was trying to say is that good calculators have plenty of precision.

II2II an hour ago | parent | prev | next [-]

Yep, the specific example wasn't important. I chose an example involving order of operations and an integer overflow simply because it would be easy to discuss. (I have been out of the field for nearly 20 years now.) Your example of floating-point error is another. I also encountered artifacts from approximations of transcendental functions.

Choosing a "better" language was not always an option, at least at the time. I was working with grad students who were managing huge datasets, sometimes from large simulations and sometimes from large surveys. They were using C. Some of the faculty may have used Fortran. C exposes you to the vagaries of the hardware, and I'm fairly certain Fortran does as well. They weren't going to use a calculator for those tasks, nor an interpreted language. Even if they had wanted to choose another language, the choice was limited by the machines they used. I've long since forgotten what the high-performance cluster was running, but it wasn't Linux and it wasn't on Intel. They may have been able to license something like Mathematica for it, but that wasn't the type of computation they were doing.

skydhash 42 minutes ago | parent | prev [-]

But floating-point errors manifest in different ways. Most people only care about 2 to 4 decimal places, which even the cheapest calculators can handle well over a good number of consecutive ordinary computations. Anyone who cares about better precision will choose a better calculator. So floating-point error is remediable.

anthk 3 hours ago | parent | prev [-]

Good languages with proper number towers will deal with both cases in equal terms.

yunwal 4 hours ago | parent | prev [-]

Determinism just means you don't have to use statistics to approach the right answer. It's not some silver bullet that magically makes things understandable and it's not true that if it's missing from a system you can't possibly understand it.

lukev 4 hours ago | parent [-]

That's not what I mean.

If I use a calculator to find a logarithm, and I know what a logarithm is, then the answer the calculator gives me is perfectly useful and 100% substitutable for what I would have found if I'd calculated the logarithm myself.

If I use Claude to "build a login page", it will definitely build me a login page. But there's a very real chance that what it generated contains a security issue. If I'm an experienced engineer I can take a quick look and validate whether it does or whether it doesn't, but if I'm not, I've introduced real risk to my application.

threatofrain 4 hours ago | parent [-]

Those two tasks are just very different. In one world you have provided a complete specification, such as 1 + 1, for which the calculator responds with some answer, and both you and the machine have a decidable procedure for judging answers. In the other world you have engaged in a declaration for which there are many right and wrong answers, and thus even the boundaries of error are in question.

It's equivalent to asking your friend to pick you up, and they arrive in a big vs small car. Maybe you needed a big car because you were going to move furniture, or maybe you don't care, oops either way.

lukev 3 hours ago | parent [-]

Yes. That is the point I was making.

Calculators provide a deterministic solution to a well-defined task. LLMs don't.

didgetmaster 4 hours ago | parent | prev | next [-]

If you hand a broken calculator to someone who knows how to do math and they enter 123 + 765 and get 6789, they should instantly know something is wrong. Hand that calculator to someone who never understood what the tool actually does but just accepts whatever answer appears, and they would likely think the answer was totally reasonable.

Catching an LLM hallucinating often takes a basic understanding of what the answer should look like before asking the question.

abustamam 2 hours ago | parent [-]

One time when I was a kid I was playing with my older sister's graphing calculator. I had accidentally pressed the base button and was now in hex mode. I did some benign calculation like 10 + 10 and got 14. I believed it!

I went to school the next day and told my teacher that the calculator says that 10+10 is 14, so why does she say it's 20?

So she showed me on her calculator. She pressed the hex button and explained why it was 14.

I think a major problem with people's usage of LLMs is that they stop at 10+10=14. They don't question it or ask someone (even the LLM) to explain the answer.
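For anyone following along, the story checks out if only the display was in hex: decimal 20 renders as 14 in base 16. A quick sketch:

```python
total = 10 + 10            # entered in decimal
print(format(total, 'x'))  # displayed in hex: "14", i.e. 1*16 + 4
```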

saltcured 40 minutes ago | parent [-]

Totally on a tangent here, but what kind of calculator would have a hex mode where the inputs are still decimal and only the output is hex..?

ThrowawayR2 3 hours ago | parent | prev | next [-]

The calculator analogy is wrong for the same reason. Knowing and internalizing arithmetic, algebra, and the shape of curves, etc. are mathematical rungs to get to higher mathematics and becoming a mathematician or physicist. You can't plug-and-chug your way there with a calculator and no understanding.

The people who make the calculator analogy are already victims of the missing rung problem and they aren't even able to comprehend what they're lacking. That's where the future of LLM overuse will take us.

Wowfunhappy 3 hours ago | parent | prev | next [-]

> People would have said the same about graphing calculators or calculators before that.

As it happens, we generally don't let people use calculators while learning arithmetic. We make children spend years using pencil and paper to do what a calculator could in seconds.

yoyohello13 2 hours ago | parent [-]

This is why I don’t understand the calculator analogy. Letting beginners use LLMs is like if we gave kids calculators in 1st grade and told Timmy he never needs to learn 2 + 2. That’s not how education works today.

Wowfunhappy 2 hours ago | parent [-]

I think this is exactly why calculators are a great analogy, and a hint toward how we should probably treat LLMs.

Jensson 4 hours ago | parent | prev | next [-]

> People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

Well, we still make people calculate manually for many years, and we still make people listen to lectures instead of just reading.

But will we still have people go through years of manual coding? I guess in the future we will force them to, at least if we want to keep people competent, just like the other things you mentioned. Currently you do that on the job; in the future people won't do that on the job, so they will be expected to do it as part of their education.

nothrabannosir 3 hours ago | parent | prev | next [-]

What do people mean exactly when they bring up “Socrates saying things about writing”? Phaedrus?

> “Most ingenious Theuth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; [275a] and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess.

> "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

Sounds to me like he was spot on.

NiloCK 3 hours ago | parent [-]

But did this grind humanity to a halt?

Yes - specific faculties atrophied - I wouldn't dispute it. But the (most) relevant faculties for human flourishing change as a function of our tools and institutions.

nothrabannosir 2 hours ago | parent [-]

Someone brought up Socrates upthread:

> People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

If the conclusion now becomes “actually, Socrates was correct but it wasn’t that bad”, then why bring up Socrates in the first place?

II2II 4 hours ago | parent | prev | next [-]

> The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).

In a sense, I think you are right. We are currently going through a period of transition that values some skills and devalues others. The people who see huge productivity gains because they don't have to do the meaningless grunt work are enthusiastic about that. The people who did not come up with the tool are quick to point out pitfalls.

The thing is, the naysayers aren't wrong since the path we choose to follow will determine the outcome of using the technology. Using it to sift through papers to figure out what is worth reading in depth is useful. Using it to help us understand difficult points in a paper is useful. On the other hand, using it as a replacement for reading the papers is counterproductive. It is replacing what the author said with what a machine "thinks" an author said. That may get rid of unnecessary verbosity, but it is almost certainly stripping away necessary details as well.

My university days were spent studying astrophysics. It was long ago, but the struggles with technology handling data were similar. There were debates between older faculty who were fine with computers, as long as researchers were there to supervise the analysis every step of the way, and new faculty, who needed computers to take raw data to reduced results without human intervention. The reason was, as always, productivity. People could not handle the massive amounts of data being generated by the new generation of sensors or systematic large scale surveys if they had to intervene any step of the way. At a basic level, you couldn't figure out whether it was a garbage-in, garbage-out type scenario because no one had the time to look at the inputs. (I mean no time in an absolute sense. There was too much data.) At a deeper level, you couldn't even tell if the data processing steps were valid unless there was something obviously wrong with the data. Sure, the code looked fine. If the code did what we expected of it, mathematically, it would be fine. But there were occasions where I had to point out that the computer isn't working how they thought it was.

It was a debate in which both sides were right. You couldn't make scientific progress at a useful pace without sticking computers in the middle and without computers taking over the grunt work. On the other hand, the machine cannot be used as a replacement for the grunt work of understanding, may that involves reading papers or analyzing the code from the perspective of a computer scientist (rather than a mathematician).

compass_copium 4 hours ago | parent | prev | next [-]

We still expect high school students to learn to use graph paper before they use their TI-83, grade school students to do arithmetic by hand before using a calculator. This is essentially the post's point, that LLMs are a useful tool only after you have learned to do the work without them.

2 hours ago | parent | prev | next [-]
[deleted]
beepbooptheory 4 hours ago | parent | prev [-]

Socrates does not say this about the written word. Plato has Socrates say it about writing in the beginning sections of the Phaedrus, but it is not Socrates' opinion, nor the final conclusion he arrives at.

And yes yes you can pull up the quote or ask your AI, but they will be wrong. The quote is from Socrates reciting a "myth", as is pretty typical in a middle late dialogue like this.

But here, alas, we can recognize the utter absurdity: this just points out why writing can be bad, as Socrates does argue. Because you get guys 2000 years in the future using you and misquoting you for their dumb cause! No more logos, only endless stochastic doxa. Truly a future of sophists!

threatofrain 5 hours ago | parent | prev | next [-]

But AI might actually get you there through superior pedagogy: personal Q&A that most individuals couldn't have afforded before.

wongarsu 4 hours ago | parent | prev | next [-]

There are a lot of people in academia who are great at thinking about complex algorithms but can't write maintainable code if their life depended on it. There are ways to acquire those skills that don't go the junior developer route. Same with debugging and profiling skills

But we might see a lot more specialization as a result

cmiles74 3 hours ago | parent | next [-]

Do they need to write maintainable code? I think probably not, it's the research and discovering the new method that is important.

iterateoften 4 hours ago | parent | prev [-]

They can’t write maintainable code because they don’t have real world experience of getting your hands dirty in a company. The only way to get startup experience is to build a startup or work for one

wongarsu 4 hours ago | parent | next [-]

Duh, the only way to get startup experience is indeed to get startup experience.

My point is that getting into the weeds of writing CRUD software is not the only way to gain the ability to write complex algorithms, or to debug complex issues, or do performance optimization. It's only common because the stuff you make on the journey used to be economically valuable

iterateoften 2 hours ago | parent [-]

> write complex algorithms, or to debug complex issues, or do performance optimization

That’s the stuff that ai is eating. The stuff I’m talking about (scaling orgs, maintaining a project long term, deciding what features to build or not build etc) is stuff very hard for ai

8note 2 hours ago | parent [-]

I don't know if I'd call it "hard for AI" so much as "untrodden ground"

agents might be better at it than people are, given the right structure

tovej 4 hours ago | parent | prev [-]

What. Are you saying maintainable code is specifically related to startups? I can accept companies as an answer (although there are other places to cut your teeth), but startups is a weird carveout.

Jensson 3 hours ago | parent [-]

Writing maintainable code is learned by writing large codebases. Working in an existing codebase doesn't teach you it, so most people working at large companies do not build the skill since they don't build many large new projects. Some do but most don't. But at startups you basically have to build a big new codebase.

omega3 5 hours ago | parent | prev [-]

That’s a good analogy, but I think we’ve already gone from 0 to 10 rungs over the last couple of years. If we assume the models or harnesses will keep improving, more and more rungs will be removed. The vast majority of programmers aren’t doing novel, groundbreaking work.

skippyboxedhero 4 hours ago | parent | prev | next [-]

The correct distinction is: if you can't do something without the agent, then you can't do it.

The problem that the author describes is real. I have run into it hundreds of times now. I will know how to do something, I tell the AI to do it, the AI does not actually know how to do it at a fundamental level and will create fake tests to "prove" that it is done, and when you check the work it is wrong.

You can describe to the AI to do X at a very high-level but if you don't know how to check the outcome then the AI isn't going to be useful.

The story about the cook is 100% right. McDonald's doesn't have "chefs", they have factory workers who assemble food. The argument with AI is that working in McDonald's means you are able to cook food as well as the best chef.

The issue with hiring is that companies won't be able to distinguish between AI-driven humans and people with knowledge until it is too late.

If you have knowledge and are using AI tools correctly (i.e. not trying to zero-shot work) then it is a huge multiplier. That the industry is moving towards agent-driven workflows indicates that the AI business is about selling fake expertise to the incompetent.

klabb3 2 hours ago | parent | prev | next [-]

> The problem arrises when Bob encounters a problem too complex or unique for agents to solve.

It’s actually worse than that: the AI will not stop and say ”too complex, try in a month with the next SOTA model”. Rather, it will give Bob a plausible-looking solution that Bob cannot identify as right or wrong. If Bob is working on an instant-feedback problem, it’s ok: he can flag it, try again, ask for help. But if the error can’t be detected immediately, it can come back with a vengeance in a year. Perhaps Bob has already been promoted by then, and Bob’s replacement gets to deal with it. In either case, Bob cannot be trusted any more than the LLM itself.

raldi 5 hours ago | parent | prev | next [-]

To me it feels more like learning to cook versus learning how to repair ovens and run a farm. Software engineering isn’t about writing code any more than it’s about writing machine code or designing CPUs. It’s about bringing great software into existence.

victorbjorklund 4 hours ago | parent [-]

Or farming before and after agricultural machines. The principles are the same but the ”tactical” stuff are different.

roenxi 6 hours ago | parent | prev | next [-]

That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise. Life throws us hard problems. I don't recall if we even assumed Bob was unusually capable, he might be one of life's flounderers. I'd give good odds that if he got through a program with the help of agents he'll get through life achieving at least a normal level of success.

But there is also a more subtle thing, which is that we're trending towards superintelligence with these AIs. At that point, Bob may discover that anything agents can't do, Alice can't do either, because she is limited by trying to think using soggy meat as opposed to a high-performance engineered thinking system. Not going to win that battle in the long term.

> The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.

The market values bulldozers. Whether a human does actual work or not isn't particularly exciting to a market.

kelnos 5 hours ago | parent | next [-]

> we're trending towards superintelligence with these AIs

The article addresses this, because, well... no we aren't. Maybe we are. But it's far from clear that we're not moving toward a plateau in what these agents can do.

> Whether a human does actual work or not isn't particularly exciting to a market.

You seem to be convinced these AI agents will continue to improve without bound, so I think this is where the disconnect lies. Some of us (including the article author) are more skeptical. The market values work actually getting done. If the AIs have limits, and the humans driving them no longer have the capability to surpass those limits on their own, then people who have learned the hard way, without relying so much on an AI, will have an advantage in the market.

I already find myself getting lazy as a software developer, having an LLM verify my work, rather than going through the process of really thinking it through myself. I can feel that part of my skills atrophying. Now consider someone who has never developed those skills in the first place, because the LLM has done it for them. What happens when the LLM does a bad job of it? They'll have no idea. I still do, at least.

Maybe someday the AIs will be so capable that it won't matter. They'll be smarter and more thorough and will be able to do more, and do it correctly, than even the most experienced person in the field. But I don't think that's even close to a certainty.

zozbot234 4 hours ago | parent | next [-]

There's no good definition of superintelligence. A calculator is already way more capable than any human at doing simple mathematical operations, and even small AIs for local use can instantly recall all sorts of impressive knowledge about virtually any field of study, which would be unfeasible for any human; but neither of those is what people mean when they wonder whether future AIs will have superintelligence.

Jensson 4 hours ago | parent [-]

General superintelligence is better defined; I assume that is what he meant. When I hear "superintelligence" I assume they just mean general superintelligence, as in it's better than humans at every single mental task that exists.

dryarzeg 5 hours ago | parent | prev [-]

> But it's far from clear that we're not moving toward a plateau in what these agents can do.

It is a debatable topic, and I agree with you that it's unclear whether we will hit a wall at some point. But one point I want to mention: back when AI agents were only conceived and the most popular type of """AI""" was the LLM-based chatbot, it also seemed that we were approaching some kind of plateau in performance. Then "agents" appeared, and that plateau, the wall we were likely to hit, was pushed further out. I don't know (who knows at all?) how far the boundaries can be pushed, or what comes next. Who knows, for example, when a completely new architecture different from Transformers will come out and be adopted everywhere, allowing for something new? The future is uncertain. We may hit the wall this year, or we may not hit it in the next 10-20 years. It is, indeed, unclear.

bee_rider 4 hours ago | parent [-]

Are agents something special? We already had LLMs that could call tools. Agents are just that, in a loop, right?

dryarzeg 4 hours ago | parent [-]

Roughly speaking - yes. Still, it's an advancement - even if it's a small one - on the usual chatbots, right?

P.S. I am well aware of all of the risks that agents brought. I'm speaking in terms of pure "maximum performance", so to speak.

dandellion 5 hours ago | parent | prev | next [-]

> we're trending towards superintelligence with these AIs

I wouldn't count on that, because even if it happens, we don't know when it will happen, and it's one of those things where how close it looks is no indication of how close it actually is. We could just as easily spend the next 100 years being 10 years away from AGI. Just look at fusion power, self-driving cars, etc.

CuriouslyC 5 hours ago | parent [-]

Fusion isn't a good example. Self driving cars are a battle between regulation and 9's of reliability, if we were willing to accept self driving cars that crashed as much as humans it'd be here already.

Whatever models suck at, we can pour money into making them do better. It's very cut and dry. The squirrely bit is how that contributes to "general intelligence" and whether the models are progressing towards overall autonomy due to our changes. That mostly matters for the AGI mouthbreathers though, people doing actual work just care that the models have improved.

b00ty4breakfast 5 hours ago | parent | prev | next [-]

>But there is also a more subtle thing, which is we're trending towards superintelligence with these AIs

do you have any evidence for that, though? Besides marketing claims, I mean.

roenxi 5 hours ago | parent [-]

I've always quite liked https://ourworldindata.org/grapher/test-scores-ai-capabiliti... to show that once AIs are knocking at the door of a human capability they tend to overshoot in around a decade.

b00ty4breakfast 3 hours ago | parent | next [-]

We have to look at what LLMs are and are not doing for this to be applicable: they are not "thinking"; there is no real cognition going on inside an LLM. They are making statistical connections between data points in their training sets. Obviously, that has borne some pretty interesting (and sometimes even useful) results, but they are not doing anything that any reasonably informed person would call "intelligent", and certainly not "super intelligent".

Lionga 5 hours ago | parent | prev [-]

This is just trash, like almost any AI benchmark. E.g. it says speech recognition has been above human level since around 2015, yet any speech input today has more errors than any human would make.

If I spoke this comment instead of typing it, maybe 2 to 5 words would be wrong. For a human it would be maybe 10% of that.

whateveracct 2 hours ago | parent | prev | next [-]

> That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise.

I have literally never run into this in my career. Challenges have always been something to help me grow.

ozim 3 hours ago | parent | prev | next [-]

Market values bulldozers for bulldozing jobs. No one is going to use a bulldozer to mow a lawn.

If Bob is going to spend $500 in tokens on something I can do for $50, I think Bob is not going to stay long in the lawn-mowing market driving a bulldozer.

mattmanser 5 hours ago | parent | prev | next [-]

The author's point went a little over your head.

It doesn't matter if Bob can be normal. There was no point to him being paid to be on the program.

From the article:

> If you hand that process to a machine, you haven't accelerated science. You've removed the only part of it that anyone actually needed.

lelanthran 5 hours ago | parent | next [-]

> It doesn't matter if Bob can be normal. There was no point to him being paid to be on the program.

Yeah, I'm surprised at the number of people who read the article and came away with the conclusion that the program was designed to churn deliverables, and then they conclude that it doesn't matter if Bob can only function with an AI holding his hand, because he can still deliver.

That isn't the output of the program; the output is an Alice. That's the point of the program. They don't want the results generated by Alice, they want the final Alice.

alex_suzuki 4 hours ago | parent | next [-]

It’s a fairly long article, maybe they had it summarized and came to that conclusion…

4 hours ago | parent | prev [-]
[deleted]
SoftTalker 2 hours ago | parent | prev [-]

And then you realize that most of science is unnecessary. As TFA points out, it doesn't matter if the age of the universe is 13.77 or 13.79 billion years. So you ban AI in science, you produce more scientists who can solve problems that don't matter. So what?

uoaei 5 hours ago | parent | prev | next [-]

"Things that have never been done before in software" has been my entire career. A lot of it requires specific knowledge of physics, modelling, computer science, and the tradeoffs involved in parsimony and efficiency vs accuracy and fidelity.

Do you have a solution for me? How does the market value things that don't yet exist in this brave new world?

ModernMech 3 hours ago | parent | prev | next [-]

> Not going to win that battle in the long term.

I would take that bet on the side of the wet meat. In the future, every AI will be an ad executive. At least the meat programming won't be preloaded to sell ads every N tokens.

wizzwizz4 5 hours ago | parent | prev [-]

From the article:

> There's a common rebuttal to this, and I hear it constantly. "Just wait," people say. "In a few months, in a year, the models will be better. They won't hallucinate. They won't fake plots. The problems you're describing are temporary." I've been hearing "just wait" since 2023.

We're not trending towards superintelligence with these AIs. We're trending towards (and, in fact, have already reached) superintelligence with computers in general, but LLM agents are among the least capable known algorithms for the majority of tasks we get them to do. The problem, as it usually is, is that most people don't have access to the fruits of obscure research projects.

Untrained children write better code than the most sophisticated LLMs, without even noticing they're doing anything special.

jnovek 4 hours ago | parent [-]

The rate of hallucination has gone down drastically since 2023. As LLM coding tools continue to pare that rate down, eventually we’ll hit a point where it is comparable to the rate at which human programmers naturally introduce bugs.

wizzwizz4 3 hours ago | parent [-]

LLMs are still making fundamentally the same kinds of errors that they made in 2021. If you check my HN comment history, you'll see I predicted these errors, just from skimming the relevant academic papers (which is to say they're obvious: I'm far from the only person saying this). There is no theoretical reason we should expect them to go away, unless the model architectures fundamentally change (and no, GPT -> LLaMA is not a fundamental change), because they're not removable discontinuities: they're indicative of fundamental capability gaps.

I don't care how many terms you add to your Taylor series: your polynomial approximation of a sine wave is never going to be suitable for additive speech synthesis. Likewise, I don't care how good your predictive-text transformer model gets at instrumental NLP subtasks: it will never be a good programmer (except as far as it's a plagiarist). Just look at the Claude Code source code: if anyone's an expert in agentic AI development, it's the Claude people, and yet the codebase is utterly unmaintainable dogshit that shouldn't work and, on further inspection, doesn't work.

That's not to say that no computer program can write computer programs, but this computer program is well into the realm of diminishing returns.
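The polynomial-vs-sine point above is easy to make concrete. A minimal sketch (Python; the `taylor_sin` helper is mine, purely for illustration): a fixed-order Maclaurin polynomial tracks sin(x) essentially exactly near the origin, but no finite number of terms keeps it bounded everywhere, whereas the sine itself always is.

```python
import math

def taylor_sin(x, n_terms):
    # Partial Maclaurin series for sin(x): sum over k of (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# Near the origin, a 10-term polynomial is essentially exact...
assert abs(taylor_sin(1.0, 10) - math.sin(1.0)) < 1e-12

# ...but far from it the same polynomial blows up, while sin stays bounded.
assert abs(taylor_sin(30.0, 10)) > 1e6
assert abs(math.sin(30.0)) <= 1.0
```

Adding more terms only widens the interval of good approximation; it never produces the global, periodic behavior of the target function.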

jnovek 4 hours ago | parent | prev | next [-]

How many people who cook professionally are gourmet chefs? I think it ends up that gourmet cooking is so infrequently needed that we don’t require everyone who makes food to do it, just a small group of professionally trained people. Most people who make food for a living work somewhere like McDonald’s and Applebee’s where a high level of skill is not required.

There will still be programming specialists in the future — we still have assembly experts and COBOL experts, after all. We just won’t need very many of them and the vast majority of software engineers will use higher-level tools.

ThrowawayR2 3 hours ago | parent [-]

That's the problem though: programmers who become the equivalent of McDonald's workers will be paid poorly like McDonald's workers and be treated as disposable like McDonald's workers.

cfloyd 4 hours ago | parent | prev | next [-]

I held this point of view for a while, but I came to the (possibly naive) conclusion that it was just forced self-assurance. Truth is, the issues with sub-par output are just a prompting and supervision deficiency. An agent team can produce a better end product if supervised and prompted correctly. The issue is most don’t take the time to do that. I’m not saying I like that this is true, quite the opposite. It is the reality of things now.

vrganj 4 hours ago | parent [-]

At some point the herding of idiot savants becomes more work than just doing the damn thing yourself in the first place.

lxgr 4 hours ago | parent [-]

I'm happy to herd idiots all my life if they come out of it smarter than they went in. The real tragedy with current LLM agents is that they're effectively stateless, and so all the effort of "educating" them feels wasted.

Once continuous learning is solved, I predict the problem addressed by TFA to become orders of magnitude bigger: What's the motivation for anyone to teach a person if an LLM can learn it much faster, will work for you forever, and won't take any sick days or consider changing careers?

vrganj 3 hours ago | parent [-]

At that point, I think it'll be time to admit to ourselves that capitalism is over.

The only reason we somewhat made it work is due to the interdependence between labor and capital. Once that's broken, the wheels will start falling off.

CuriouslyC 5 hours ago | parent | prev | next [-]

Just because Bob doesn't know e.g. Rust syntax and library modules well doesn't mean that Bob can't learn an algorithm to solve a difficult problem. The AI might suggest classes of algorithms that could be applicable given the real-world constraints, and help Bob set up an experimental plan to test different algorithms for efficacy in the situation, but Bob's intuition is still in the driver's seat.

Of course, that assumes a Bob with drive and agency. He could just as easily tell the AI to fix it without trying to stay in the loop.

bigfishrunning 4 hours ago | parent [-]

But if Bob doesn't know Rust syntax and library modules well, how can he be expected to evaluate the generated Rust code? Bugs can be subtle and non-obvious, and Rust has some constructs that are very uncommon (or don't exist) in other languages.

Human nature says that Bob will skim over and trust the parts that he doesn't understand as long as he gets output that looks like he expects it to look, and that's extremely dangerous.

ndriscoll 4 hours ago | parent [-]

Then perhaps Bob should have it use functional Scala, where my experience is that if it compiles and looks like what you expect, it's almost certainly correct.

bigfishrunning 3 hours ago | parent [-]

Sure, but Bob is very unlikely to do that unless his AI tool of choice suggests it.

bitwize 4 hours ago | parent | prev | next [-]

Bob+agents is going to be able to solve much more complex problems than Bob without agents.

That's the true AI revolution: not the things it can accelerate, but the things it puts within reach that you wouldn't have countenanced before.

b112 6 hours ago | parent | prev | next [-]

Worse, soon fewer and fewer people will taste good food, as even higher-end restaurants turn to pre-made dishes.

As fewer people know what good food tastes like, the entire market will enshittify towards lower and lower calibre food.

We already see this with, for example, fruit in cold climates. I've known people who had only ever bought it from the supermarket, then tried it at a farmers' market during the two weeks it's in season. The look of astonishment on their faces at the flavour is quite telling. They simply had no idea how dry and flavourless supermarket fruit is.

Nothing beats an apple picked just before you eat it.

(For reference, produce shipped to supermarkets is often picked, even locally, before it is entirely ripe. It lasts longer, and handles shipping better, than perfectly ripe fruit.)

The same will be true of LLMs. They're already out of "new things" to train on. I question whether they'll ever learn new languages: who would they observe to train on? And what does it matter if the code is unreadable by humans regardless?

And this is the real danger. Eventually, we'll have entire coding languages that are just weird, incomprehensible, tailored to LLMs, maybe even a language written by an LLM.

What then? Who will be able to decipher such gibberish?

Literally all true advancement will stop, for LLMs never invent, they only mimic.

CuriouslyC 5 hours ago | parent [-]

Ironically, apples are one of the fruits where tree ripening isn't a big deal for a lot of varietals. You should have used tomato as the example, the difference there is night and day pretty much across the board.

If humans can prove that bespoke human code brings value, it'll stick around. I expect that the cases where this will be true will just gradually erode over time.

zozbot234 5 hours ago | parent | prev [-]

Real-world cooks don't exactly avoid those newfangled microwave ovens though. They use them as a professional tool for simple tasks where they're especially suitable (especially for quick defrosting or reheating), which sometimes allows them to cook even better meals.

xantronix 3 hours ago | parent | prev | next [-]

I'm glad you've posted this comment, because I strongly feel more people need to see this sentiment and push back against what many above want to become the new norm. I see capitulation, and compliance in advance, and it makes me sad. I also see two very valid, antipodal responses to this phenomenon: exit from the industry, and malicious compliance through accelerationism.

To the reader and the casual passerby, I ask: Do you have to work at this pace, in this manner? I understand completely that mandates and pressure from above may instill a primal fear to comply, but would you be willing to summon enough courage to talk to maybe one other person you think would be sympathetic to these feelings? If you have ever cared about quality outcomes, if for no other reason than the sake of personal fulfillment, would it not be worth it to firmly but politely refuse purely metrics-focused mandates?

lelanthran 5 hours ago | parent | prev | next [-]

> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

"Being able to deliver using AI" wasn't the point of the article. If it was the point, your comment would make sense.

The point of the program referred to in the article is not to deliver results, but to deliver an Alice. Delivering a Bob is a failure of the program.

Whether you think that a Bob+AI delivers the same results is not relevant to the point of the article, because the goal is not to deliver the results, it's to deliver an Alice.

sd9 5 hours ago | parent [-]

I am aware of that - I was adding something along the lines of: I don’t think people care if we deliver Alices any more.

bigfishrunning 4 hours ago | parent | next [-]

People never cared about delivering Alices; they were an implementation detail. I think the article argues that they're still an important one, but one that isn't produced automatically anymore

wiseowise 4 hours ago | parent [-]

The article is talking about science research in the context of astrophysics, not coding sweatshops.

bigfishrunning 3 hours ago | parent [-]

I was also talking about producing researchers for academia.

lelanthran 4 hours ago | parent | prev [-]

> I am aware of that - I was adding something along the lines of: I don’t think people care if we deliver Alices any more.

That's irrelevant to the goal of the program - they care. Once they stop caring, they'd shut that program down.

Maybe it would be replaced with a new program that has the goal of delivering Bobs+AI, but what would be the point? I mean, the article explained in depth that there is no market for the results currently, so what would be the point of efficiently generating those results?

The market currently does not want the results, so replacing the current program with something that produces Bobs+AI would be for... what, exactly?

sd9 4 hours ago | parent [-]

There’s no market for the results, but there was a market for Alices, because they were the only people who could produce similar results historically. Now maybe there’s less of a market for Alices. Yes, maybe that means the program disappears.

fomoz an hour ago | parent | prev | next [-]

It's the next level of abstraction. Bob is still learning, he's just learning a different set of skills than Alice.

Also, the premise that it took each of them a year to do the project means Bob was slacking because he probably could've done it in less than a month.

staindk 6 hours ago | parent | prev | next [-]

They aren't going away but for some they may become prohibitively expensive after all the subsidies end.

I do think coding with local agents will keep improving to a good level but if deep thinking cloud tokens become too expensive you'll reach the limits of what your local, limited agent can do much more quickly (i.e. be even less able to do more complex work as other replies mention).

tonfa 6 hours ago | parent [-]

> They aren't going away but for some they may become prohibitively expensive after all the subsidies end.

Even if inference were subsidized (afaik it isn't when paying through API calls; subscription plans might indeed lose money on heavy users, but that's how any subscription model typically works, and it can still be profitable overall), models are still improving and getting cheaper, so that seems unlikely.

SlinkyOnStairs 38 minutes ago | parent | next [-]

> afaik it isn't when paying through API calls

There is no evidence for this. The claims that the API is "profitable on inference" are all hearsay. Any AI executive could immediately dispel the misconception with a public statement subject to SEC regulation, yet they don't.

> Models are still improving/getting cheaper

The diminishing returns have set in for quality, and for a while now the increased quality has come at the cost of massive increases in token burn; it's not getting cheaper.

Worse yet, we're in an energy crisis. Iran has threatened to strike critical oil infrastructure, and repairs would take years.

AI is going to get significantly more expensive, soon.

ernst_klim 5 hours ago | parent | prev [-]

It probably is still subsidized, just not as much. We won't know if these APIs are profitable unless these companies go public, and till then it's safe to bet these APIs are underpriced to win market share.

zozbot234 4 hours ago | parent | next [-]

Third-party AI inference with open models is widely available and cheap. You're paying as much as for proprietary mini-models, or even less, for something far more capable, and that's without any subsidies (other than the underlying capex and the expense of training the model itself).

CuriouslyC 5 hours ago | parent | prev | next [-]

Anthropic has shared that API inference has a ~60% margin. OpenAI's margin might be slightly lower since they price aggressively but I would be surprised if it was much different.

bigfishrunning 4 hours ago | parent [-]

Is that margin enough to cover the NRE of model development? Every pro-AI argument hinges on the models continuing to improve at a near-linear rate

tonfa 3 hours ago | parent [-]

Yeah, but the argument people make is that when the music stops, the cost of inference goes through the roof.

I could imagine that when the music stops, advancement of new frontier models slows or stops, but that doesn't remove any current capabilities.

(And to be fair the way we duplicate efforts on building new frontier models looks indeed wasteful. Tho maybe we reach a point later where progress is no longer started from scratch)

throwthrowuknow 5 hours ago | parent | prev [-]

Then we’ll likely know by the end of this year.

KronisLV 5 hours ago | parent | prev | next [-]

> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading.

I dread the flip side of this which is dealing with obtuse bullshit like trying to understand why Oracle ADF won’t render forms properly, or how to optimize some codebase with a lot of N+1 calls when there’s looming deadlines and the original devs never made it scalable, or needing to dig into undercommented legacy codebases or needing to work on 3-5 projects in parallel.

Agents iterating until those start working (at least cases that are testable) and taking some of the misery and dread away makes it so that I want to theatrically defenestrate myself less.

Not everyone has the circumstance to enjoy pleasant and mentally stimulating work that’s not a frustrating slog all the time - the projects that I actually like working on are the ones I pick for weekends, I can’t guarantee the same for the 9-5.

sd9 4 hours ago | parent [-]

Oh yes, it’s an entirely privileged position to be able to enjoy your work. But it’s a privilege I have enjoyed and not one I want to give up unless I have to. We spend an extraordinary amount of our waking life at work.

KronisLV 4 hours ago | parent [-]

I do hope you can find a set of circumstances that don't make you give it up too much. And hey, if you end up moving to another line of work than software, no reason why you couldn't still enjoy working on whatever project you want over the weekend, too.

klabb3 2 hours ago | parent | prev | next [-]

> So if Bob can do things with agents, he can do things.

Yes, but how does he know if it worked? If you have instant feedback, you can use LLMs and correct when things blow up. In fact, you can often try all options and see which works, which makes it “easy” in terms of knowledge work. If you have delayed feedback, costly iterations, or multiple variables changing underneath you at all times, understanding is the only way.

That’s why building features and fixing bugs is easy, and system level technical decision making is hard. One has instant feedback, the other can take years. You could make the “soon” argument, but even with better models, they’re still subject to training data, which is minimal for year+ delayed feedback and multivariate problems.

ozim 3 hours ago | parent | prev | next [-]

There is still a lot of engineering to be done with LLMs. Maybe not exactly writing code but I think a lot of optimization problems will be there no matter what.

Some people treat the toilet as a magic hole where they throw stuff in, flush, and think it is fine.

If you throw garbage in, you will at some point have problems.

We are at a stage where people think it is fine to drop everything into an LLM, but then they will see the bill for usage and might be surprised that they burned money and the result was not exactly what they expected.

coffeefirst 3 hours ago | parent [-]

Yep. I hate to predict the future, but I’m betting on small, open models used as tools here and there. Which is great: you can get 90% of the speedup at 5-10% of the cost, once you account for how time consuming it is to make sense of and fix the output.

The economics and security problems of full agents running in loops all day may come home to roost faster than expertise rot.

lxgr 3 hours ago | parent | prev | next [-]

> if Bob can do things with agents, he can do things.

This point is directly addressed in the paper: Bob will ultimately not be able to do the things Alice can, with or without agents, because he didn't build the necessary internal deep structure and understanding of the problem space.

And if Alice later on ends up being a better scientist (using agents!) than Bob will ever be, would you not say there was something lost to the world?

Learning needs a hill to climb, and somebody to actually climb it. Bob only learned how to press an elevator button.

michaelcampbell 3 hours ago | parent | prev | next [-]

> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading. I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

I am in the same boat, but close enough to retirement that I'm less "scared" about it. For me I'm moving up the chain; not people management, but devoting a lot more of my time up the abstraction continuum. Looking a lot more at overall designs and code quality and managing specs and inputs and requirements.

I wrote some design docs past few days for a big project the team is embarking on. We never had that before, at least not in the level of detail (per time quantum) that I was able to produce. Used 2 models from 2 companies - one to write, one to review, and bounce between them until the 3 of us agree.

Honestly it didn't take any less time than doing it alone, but the level of detail was better and covered more edge cases. Calling it a "win" for now. I still enjoy it, as most of the code we write is fancy CRUD anyway and doesn't have huge scaling problems to solve (and too few devs, I feel, are being honest about their work here).

qsera 5 hours ago | parent | prev | next [-]

>The thing is, agents aren’t going away...

Aren't they currently propped up by investor money?

What happens when the investors realize the scam that it is and stop investing or start investing less...

samusiam 5 hours ago | parent [-]

> Aren't they currently propped up by investor money?

Are Chinese model shops propped up by investor money? Is Google?

Open weights models are only 6 months behind SOTA. If new model development suddenly stopped, and today's SOTA models suddenly disappeared, we would still have access to capable agents.

qsera 5 hours ago | parent | next [-]

>we would still have access to capable agents.

But they would be outdated, right?

Would an agent that can only code in COBOL be as useful today?

iugtmkbdfil834 4 hours ago | parent | next [-]

Outdated by six months. Surely non-SOTA models aren't unusably outdated. And your argument ignores the 'new model development suddenly stopped' premise: if it stops, there is nothing to be outdated relative to.

lxgr 3 hours ago | parent | prev [-]

> But they would be outdated, right?

Outdated compared to what? In your counterfactual, VC funded agents don't exist anymore, no?

Your argument, if I understand it correctly, is that they might somehow go away entirely when VC funding dries up, whereas more realistically they'll probably at most become twice as expensive or regress half a year in performance.

Jensson 3 hours ago | parent [-]

> Outdated compared to what? In your counterfactual, VC funded agents don't exist anymore, no?

Outdated compared to reality and to humans: their knowledge cutoff falls a year further behind for every year they don't get updates. Humans continuously expand their knowledge, and the models need to keep up with that.

loeg 4 hours ago | parent | prev [-]

Well, the Chinese shops are propped up by the CCP instead.

samusiam 4 hours ago | parent [-]

That's true, but the "AI bubble bursts" scenario is usually tied to Western investors getting essentially margin-called. If that happens, the CCP won't suddenly stop their investment; Chinese models will most likely continue developing.

asHg19237 5 hours ago | parent | prev | next [-]

Many things have come and gone in this fashion-oriented industry. Everyone is already bored to hell by AI output.

AI in software engineering is kept afloat by the bullshitters who jump on any new bandwagon because they are incompetent and need to distract from that. Managers like bullshit, so these people thrive for a couple of years until the next wave of bullshit is fashionable.

QuantumNomad_ 5 hours ago | parent | prev | next [-]

> if Bob can do things with agents, he can do things

I’ve been reminded lately of a conversation I had with a guy at a hacker space cafe in Berlin around ten years ago.

He had been working as a programmer for a significantly longer time than me. Long enough that for many years of his career, he had been programming in assembly.

He was lamenting that these days, software was written in higher level languages, and that more and more programmers no longer had the same level of knowledge about the lower level workings of computers. He had a valid point and I enjoyed talking to him.

I think about this now when I think about agentic coding. Perhaps over time most software development will be done without knowledge of the higher-level programming languages we use today. There will still be people who work in those languages and are intimately familiar with them, just as there are still people who work in assembly today, even though their share of programmers has shrunk over time.

And just as there are areas where assembly is still required knowledge, I think there will be areas where knowledge of the programming languages we use today will remain necessary and vibe coding alone won’t cut it. But the share of people working in high-level languages will go down relative to the number of people vibe coding and never even looking at the code the LLM is writing.

loveparade 5 hours ago | parent | next [-]

I see these analogies a lot, but I don't like them. Assembly has a clear contract. You don't need to know how it works because it works the same way each time. You don't get different outputs when you compile the same C code twice.

LLMs are nothing like that. They are probabilistic systems at their very core. Sometimes you get garbage. Sometimes you win. Change a single character and you may get a completely different response. You can't easily build abstractions when the underlying system has so much randomness because you need to verify the output. And you can't verify the output if you have no idea what you are doing or what the output should look like.

lxgr 3 hours ago | parent [-]

I think these analogies are largely correct, but TFA is about something subtly different:

LLMs don't make it impossible to do anything yourself, but they make it economically impractical to do so. In other words, you'll have to largely provide both your own funding and your own motivation for your education, unless we can somehow restructure society quickly enough to substitute both.

With assembly, we arguably got lucky: It turns out that high-level programming languages still require all the rigorous thinking necessary to structure a programmer's mind in ways that transfer to many adjacent tasks.

It's of course possible that the same is true for using LLMs, but at least personally, something feels substantially different about them. They exercise my "people management" muscle much more than my "puzzle solving" one, and wherever we're going, we'll probably still need some puzzle solvers too.

lelanthran 5 hours ago | parent | prev | next [-]

> He had been working as a programmer for a significantly longer time than me. Long enough that for many years of his career, he had been programming in assembly.

Please, not this pre-canned BS again!

Comparing abstractions to AI is an apples to oranges comparison. Abstractions are dependable due to being deterministic. When I write a function in C to return the factorial of a number, and then reuse it again and again from Java, I don't need a damn set of test cases in Java to verify that factorial of 5 is 120.

With LLMs, you do. They aren't an abstraction, and seeing this worn out, tired and routinely debunked comparison being presented in every bloody thread is wearing a little thin at this point.

We've seen this argument hundreds of times on this very site. Repeating it doesn't make it true.
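The determinism point is easy to demonstrate. A minimal sketch (in Python rather than C/Java, purely for brevity): once a deterministic function has been verified, every future call is covered by that verification, which is exactly the property LLM output lacks.

```python
def factorial(n):
    # A deterministic abstraction: identical input, identical output, on every call.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Verify once; the check holds for every subsequent reuse, with no per-call review.
assert factorial(5) == 120
```

Call it from anywhere, any number of times: factorial of 5 is 120, without a fresh test suite at each call site.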

sd9 5 hours ago | parent | prev | next [-]

Lovely story, thanks for sharing.

I wonder how many assembly programmers got over it and retrained, versus moved on to do something totally different.

I find the agentic way of working simultaneously more exhausting and less stimulating. I don’t know if that’s something I’m going to get over, or whether this is the end of the line for me.

AnimalMuppet 4 hours ago | parent [-]

I wasn't there at the time, but I believe that most assembly programmers learned higher-level languages.

My mother actually started programming in octal. I don't remember her exact words, but she said something to the effect that her life got so much better when she got an assembler. I suspect that going from assembly to compilers was much the same - you no longer had to worry about register allocations and building stack frames.

ThrowawayR2 2 hours ago | parent [-]

It was a trade-off for a very long time (late 1960s to late 1990s IMO): the output of the early compilers was much less efficient than hand writing assembly language but it enabled less skilled programmers to produce working programs. Compilers pulled ahead when eventually processor ISAs evolved to optimize executing compiler generated code (e.g. the CISC -> RISC transition) and optimizing compilers became practical because of more powerful hardware. It definitely was not an overnight transformation.

jurgenburgen 3 hours ago | parent | prev [-]

The difference is that you don’t need to review the machine code produced by a compiler.

The same is not true for LLM output. I can’t tell my manager I don’t know how to fix something in production that the agent wrote. The equivalent analogy would be if we had to know both the high-level language _and_ assembly.

torben-friis 5 hours ago | parent | prev | next [-]

Can you run an industry-level LLM at home?

If not, you're trading learning to cook for Uber-only meals.

And since the alternative is starving, Uber will boil the pot.

Don't give up your self sufficiency.

zozbot234 4 hours ago | parent | next [-]

> Can you run an industry level LLM at home?

Assuming that by "at home" you mean using ordinary hardware, not something that costs as much as a car: yes, very slowly, for simple tests (not proprietary models, obviously, but quite capable ones nonetheless). That's not exactly viable for agentic coding, which needs boatloads of tokens for the simplest things, but you can also run smaller local models that are still quite capable for many things.

sd9 5 hours ago | parent | prev | next [-]

I’m very good at the handcrafted stuff, I’ve been doing this a while. I don’t feel like giving up my self sufficiency, I just feel like the writing is on the wall.

torben-friis 5 hours ago | parent [-]

By "you" I actually meant this hypothetical person who's only good enough for AI assisted. Though even for us who are already experienced, we should keep the manual stuff even if it's just as going to the gym. I don't see myself retaining my skills for long by just reviewing LLM output.

sd9 5 hours ago | parent [-]

Yes sorry, I didn’t think you were addressing me directly, just adding my own thoughts.

I agree totally with the sentiment, and I am concerned about my own skills atrophying.

loeg 4 hours ago | parent | prev | next [-]

The costs just aren't that high. They could be 10x higher and it still wouldn't be a huge deal.

Almondsetat 5 hours ago | parent | prev [-]

Can you build a computer at home?

There is absolutely nothing self-sufficient about computer hardware

jappgar 4 hours ago | parent | next [-]

Or generate electricity? Or grow enough food to survive? Medicines?

"Self-sufficiency" arguments coming from tech nerds are so tiring.

torben-friis 4 hours ago | parent | prev [-]

No, and that's the reason we're now paying twice what we paid a couple years ago. But I can write software at home.

We're already vulnerable to enshittification in so many areas, why increase the list? How does that work in my favor at all?

mchaver 4 hours ago | parent | prev | next [-]

> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

Following the model of how startups have worked for the last 20 years or so, I expect agents to eventually be locked down, nerfed, or ad-infested, with higher payments required. We are enjoying the fruits of VC money at the moment, and they are getting everyone addicted to agents. Eventually they need to turn a profit.

Not sure how this plays out, but I would hang on to any competencies you have for anyone (or business) that wants to stick around in software. Use agents strategically, but don't give up your ability to code/reason/document, etc. The only way I can see this working differently is that there are huge advances in efficiency and open-source models.

foxglacier a minute ago | parent | next [-]

Even when they're profitable, the premium ad-free service will still be cheaper than humans, so those skills will still be mostly useless.

spacechild1 4 hours ago | parent | prev [-]

That's one of several reasons why I'm trying not to rely too much on LLMs. The prospect of only being able to code with a working internet connection and a subscription to some megacorp service is not particularly appealing to me.

gbro3n 5 hours ago | parent | prev | next [-]

I think a good analogy is people not being able to work on modern cars because they are too complex or require specialised tools. True, I can still go places with my car, but when it goes wrong I'm less likely to be able to resolve the problem without (paid-for) specialised help.

b00ty4breakfast 5 hours ago | parent [-]

And just like modern vehicles rob the user of autonomy, so too for coding agents. Modern tech moves further and further away from empowering normal people and increasingly serves to grow the influence of corporations and governments over our day to day lives.

It's not inherent, but it is reality unless folks stop giving up agency for convenience. I'm not holding my breath.

duskdozer 5 hours ago | parent [-]

Soon enough we'll have ads playing in our cars at stoplights.

ipaddr 4 hours ago | parent [-]

We have that now for most people: the radio. But hopefully we will be in an LLM self-driving car and can get ads for the entire trip.

jurgenaut23 5 hours ago | parent | prev | next [-]

I understand your point, but this is a purely utilitarian view and it doesn’t account for the fact that, even if agents may do everything, it doesn’t mean they should, both in a normative and positive sense.

There is a vast range of scenarios in which being more or less independent from agents to perform cognitive tasks will be both desirable and necessary, at the individual, societal and economic level.

The question of how much territory we should give up to AI really is both philosophical and political. It isn’t going to be settled in mere one-sided arguments.

sd9 5 hours ago | parent [-]

The people who pay my bills operate in a largely utilitarian fashion.

They’re not going to pay me to manually program because I find it more enjoyable, when they can get Bob to do twice as much for less.

This is why I say I don’t like it, but it is what it is.

codemonkey5 5 hours ago | parent | prev | next [-]

Some people probably enjoyed writing assembly (I am not one of those people, especially when I had to do it on paper in university exams), and code agents can probably do it well - but for the hard tasks, the tasks that are net new, code agents will produce bad results, and you still need the people who enjoy that work to show the path forward.

Code agents are great template generators and modifiers, but for net-new (innovative!) work they're often barely usable without a ton of handholding or "non-code-generation coding".

zozbot234 5 hours ago | parent | prev | next [-]

> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading.

You're still working on intellectually stimulating programming problems. AI doesn't go all the way with any reliability, it just provides some assistance. You're still ultimately responsible for getting things right, even with key AI help.

nidnogg 6 hours ago | parent | prev | next [-]

I don't like it either. But what's really stopping other markets from flunking similarly later on? What's to say other jobs are going to be any better? Back in college, most of my peers would say "I'm not cut out for anything else. This is it". They were, sure enough, computer and/or math people at heart from an early age.

More importantly, what's gonna be the next stable category of remote-first jobs that a person with a tech-adjacent or tech-minded skillset can tack onto? That's all I care about, to be honest.

I may hate tech with a passion at times and be overly bullish on its future, but there's no replacing my past jobs which have graced me and many others with quality time around family, friends, nature and sports while off work.

sd9 6 hours ago | parent [-]

I don’t know, it’s only since about December that I felt things really start to shift, and February when my job started to become very different.

Personally I’m looking at more physical domains, but it’s early days in my exploration. I think if I wanted to stick to remote work (which I have enjoyed since 2020), then the AI story would just keep playing out.

I’m also totally open to taking a big pay cut to do something I actually enjoy day to day, which I guess makes it easier.

throwanem 5 hours ago | parent [-]

So recent? I've been on sabbatical (the real kind, self-funded) for eighteen months, and while my sense has been things have not stopped heading downhill since I stepped off the ride back in 2024, to hear of such a sudden step change is somewhat novel. "Very different" just how, if you don't mind my asking?

(I'm also looking for local, personally satisfying work, in exchange for a pay cut. Early days, and I am finding the profession no longer commands quite the social cachet it once did, but I'm not foolish enough to fail to price for the buyer's market in which we now seek to sell our labor. Besides, everyone benefits from the occasional reminder to humility! "Memento mori" and all that.)

nidnogg 19 minutes ago | parent | next [-]

Don't you feel that sabbaticals kinda get you off the new tech wave anyway? I usually check in on news much more often when bored at slow work days.

On the side, this might not have to do at all with your case, but the reason I personally keep putting off sabbaticals is that I feel it can severely compound my routine-wrecking habits, and I don't think I'd be strong-willed enough to give it meaningful purpose. Not to mention the first point, i.e. it would 100% make my industry pessimism worse. I'd like to not bounce away from tech forever. Rather, figure out what scratches the same itch I've been seeking since the start.

I'm all about big road trips, big adventures but I think the couch potato risk is all too real for me.

sd9 5 hours ago | parent | prev [-]

I feel like the models and harnesses had a step change in capability around December, as somebody who’s been using them daily since early/mid 2025. It’s gone from me doing the majority of the programming, to me doing essentially none, since December. And that change felt quite sudden.

The more recent shift after December is mostly explained by people at my company catching up with the events that happened in December. And that’s more about drastically increased productivity expectations, layoffs, etc.

I’m also considering a self funded sabbatical. I could do it. What sort of thing have you been up to, any advice?

nidnogg 15 minutes ago | parent | next [-]

I can relate to the feeling - this timing tracks for when most, if not all of my friends, all my co-workers (even the few who were resisting adopting any AI tooling) flocked to just "Claude Code". Similar to how the masses gobbled up VS Code a while back.

Company started doling out Claude Code configs, everything is now CLI/agentic-AI harnessed, and stories about "90% of this company's code is now AI-generated" pop up every other day.

It seems the last frontier to breach before this was getting agentic black boxes not to crap out during the first hour of work. After that, it's really been much smoother with those tools.

throwanem 5 hours ago | parent | prev [-]

Uh, don't come into it expecting to know exactly what you're going to be up to, might be the best advice I could give. Oh, do plan! But loosely: especially early on, as you get out from under the crushing burden of constant stress and misery, there will be surprises. I haven't been doing a lot of hobby programming, for example, not much more than a few faces for my Amazfit wristwatch - but my diary's grown by about a thousand pages, well above the usual rate, and I've begun a new series of crappy-camera snapshot albums, this latter especially being a real surprise despite that I have been a photographer for many years now. (My daily driver since 2021 has been a Nikon D850 with three SB-R200 flashes on a ring mount, mostly chasing wild wasps to get their portraits from six inches away. Shooting a total piece of shit for a change has been a hilarious revelation!)

Imagination operates more freely and foolishness is less heavily ballasted, and any kind of emotional crap you've been keeping shoved to the side with the force of pressing obligations is likely to come out and start rearranging the metaphorical furniture. If you've got stuff like that, this will be a good opportunity to get to grips with it, whether you mean to or not. Prepare accordingly.

And finally, there's not too many more appealing social presentations in my experience than that deriving from the confident knowledge that, within reason at least, one has earned and is now deploying the privilege to do more or less whatever the hell one likes: not the confidence contingent on a fat wallet, but that inherent in having only those scheduled obligations one chooses, and also in understanding precisely the difference underlying that distinction. Very few people in this world have the skill to behave as if their time were entirely their own to command, and this makes a difference in deportment that others will notice and attend without necessarily knowing why. It is more subtle and far less brash than the confidence in wielding the name of an employer that everyone knows, but for like reasons it also has worth and durability which the other does not. Whether or not you keep it, the experience of having had it is about as unforgettable and as indescribable as the trick to riding a bike.

Thanks for the info! My last direct exposure to a frontier model was now almost twelve months ago, so I suppose I'll have to dedicate a few hours pretty soon.

loeg 4 hours ago | parent | prev | next [-]

Being able to deliver junior-level work isn't the goal of training juniors.

bakugo 5 hours ago | parent | prev | next [-]

Bob can't do things, Bob's AI can do things that Bob asks it to do. And the AI can only do things that have been done before, and only up to a certain level of complexity. Once that level is reached, the AI can't do things anymore, and Bob certainly isn't going to do anything about that, because Bob doesn't know how to do anything himself. One has to question what value Bob himself even brings to the table.

But let's assume Bob continues to have an active role, because the people above him bought in to the hype and are convinced that "prompt engineer" is the job of the future. When things inevitably start falling apart because the Bobs of the world hit a wall and can't solve the problems that need to be solved (spoiler: this is already happening), what do we do? We need Alices to come in and fix it, but the market actively discourages the existence of Alice, so what happens when there are no more Alices left? Do we just give up and collectively forget how to do things beyond a basic level?

I have a feeling that, yes, we as a species are just going to forget how to do things beyond a certain level. We are going to forget how to write an innovative science paper. We are going to forget how to create websites that aren't giant, buggy piles of React spaghetti that make your browser tab eat 2GB of RAM. We've always been forgetting, really - there are many things that humans in the past knew how to do, but nobody knows how to do today, because that's what happens when the incentive goes missing for too long. Price and convenience often win over quality, to the point that quality stops being an option. This is a form of evolutionary regression, though, and negatively affects our quality of life in many ways. AI is massively accelerating this regression, and if we don't find some way to stop it, I believe our current way of life will be entirely unrecognizable in a few decades.

thepasch 5 hours ago | parent [-]

The question is whether it’s more important to be able to do things, or more important to have a good sense and a keen eye for what to do at any given moment. I personally think both are really important, and I also think AI won’t be able to do both better than any human could for another while, and moreso when it comes to doing both at the same time (though I’m not going to claim it’s never going to).

My point is that both Alice and Bob have a place in this world. In fact, Bob isn't really doing much different from what a Principal Investigator is already doing today in a research context.

lelanthran 5 hours ago | parent [-]

> The question is whether it’s more important to be able to do things, or more important to have a good sense and a keen eye for what to do at any given moment.

Those aren't mutually exclusive.

"People who do things" can do both, and doing the latter is a function of doing the former, so they tend to do the latter sufficiently well.

"People who prompt things" can only do the latter, and they routinely do it poorly.

thepasch 4 hours ago | parent [-]

> “People who prompt things” can only do the latter, and they routinely do it poorly.

Right, but what I don’t agree with here is the idea that this category of people will never be able to improve into the first category of people. The value of an experienced anything is that they realize there is a big chasm between something that works now and something that will continue to work long into the future.

I don’t agree that doing everything yourself manually is the only thing that can grant you that understanding, because I don’t think that understanding is domain-specific. It evolves naturally as soon as someone realizes that their list of unknown unknowns is FAR larger than their list of known anythings, and that the first step in attempting to solve a problem is to prune that list as far as you can get it while realizing you will never ever be able to reduce it to zero.

You can do that by spending two weeks to build a brick wall by hand, or you can do that by spending two weeks having your magical helpers build ten brick walls that eventually collapse. I don’t think the tools are some sort of fundamental threat to cognition, I think they’re - within this society - a fundamental threat to safety, because the relentless pursuit of profit means even those that realize those ten brick walls should never actually ever be used to hold anything up will find themselves pressured to put a roof on them and hope, pray, they hold.

And this isn’t an LLM-specific thing. The vast diverse space of building codes around the world proves this, and coincidentally, the countries with laxer building codes tend to get a lot more done a lot faster; and they also tend to deal with a big tragic collapse every now and then, which I suppose someone will file away as collateral somewhere.

Jensson 3 hours ago | parent | next [-]

> I don’t agree that doing everything yourself manually is the only thing that can grant you that understanding, because I don’t think that understanding is domain-specific. It evolves naturally as soon as someone realizes that their list of unknown unknowns is FAR larger than their list of known anythings, and that the first step in attempting to solve a problem is to prune that list as far as you can get it while realizing you will never ever be able to reduce it to zero.

This isn't true: a car mechanic never evolves into an engineer, a nurse never evolves into a doctor. A car mechanic can learn to do some tasks you normally need an engineer for, and same with nurses, but they never build the entire core set of skills that separates engineers from mechanics and doctors from nurses.

There are maybe some exceptions to this, but those exceptions are so rare that it doesn't matter for this discussion. A few people still learning it properly won't save anything.

thepasch 2 hours ago | parent [-]

> This isn't true: a car mechanic never evolves into an engineer, a nurse never evolves into a doctor.

“Doesn’t generally happen” =/= “is literally impossible”. The word “never” should be used with care.

> A car mechanic can learn to do some tasks you normally need an engineer for and same with nurses

This statement can only make sense if you regard titles as something that's imbued upon you, and until it is, you are incapable of performing the acts that someone who has earned that title can perform. I'll just say I fundamentally disagree with this notion on pretty much every conceivable level, and if that's the belief system you subscribe to, that would also make arguing about this any further pointless. But I might just be getting you wrong.

Peritract 2 hours ago | parent | prev [-]

> the idea that this category of people will never be able to improve into the first category of people

The fundamental difference between the categories is that the first is filled with people who put the effort in to learning/understanding, and the second is filled with people who take the shortcut around learning/understanding.

Changing from the second category to the first is something that would require already being in the first.

thepasch an hour ago | parent [-]

> The fundamental difference between the categories is that the first is filled with people who put the effort in to learning/understanding, and the second is filled with people who take the shortcut around learning/understanding.

Exactly! That’s my entire point. Because now you’re separating the categories by “is willing to put in effort” and “is not willing to put in effort” rather than by “has done the thing” and “hasn’t done the thing”.

I think the disagreement doesn't lie in this concept, but rather in whether an LLM can be used by someone who's willing to put in effort to assist them, rather than just having it do everything for them. But as long as you understand what the thing you're using is for, you don't have to understand exactly how it works. You can shift gears in a car without a physics degree.

pigeons 3 hours ago | parent | prev | next [-]

> So if Bob can do things with agents, he can do things.

But he does things wrong.

coldtea 4 hours ago | parent | prev | next [-]

>The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

He'll get things (papers, code, etc.) which he can't evaluate. And the next round of agents will be trained on the slop produced by the previous ones. Both successive Bobs and successive agents will have less understanding.

edbmiller69 3 hours ago | parent | prev | next [-]

No - you need to understand the details in order to do the “high level” work.

atoav 4 hours ago | parent | prev | next [-]

The thing is Bob can use HammerAsAService™ to put in a nail. It is so cheap! Way cheaper than buying an actual hammer.

The problem with unlearning generic tools and relying on ones you rent from big corporations is that it is unreliable in the long term. The prices will be rising. The conditions will worsen. Oh nice that Bob made a thing using HammerAsAService™, but the terms and conditions (changing once a week) he accepted last week clearly say it belongs to the company now. Bob should be happy they are not suing him yet, but Bob isn't sure whether the thing that came out a month later was independently developed by that company or just a clone of his work. Bob wishes he knew how to use a hammer.

thepasch 2 hours ago | parent [-]

The majority of nails people might want to rent a HammerAsAService for these days can already easily be put in by open source hammers you can run on consumer, uh… workbenches.

Peritract 2 hours ago | parent [-]

Not to stretch the metaphor too far, but those workbenches require understanding (and hammers) to set up.

Will the paid tools always tell their users how to use the free versions, and if not, how will the users learn to do it independently?

thepasch an hour ago | parent [-]

> Will the paid tools always tell their users how to use the free versions, and if not, how will the users learn to do it independently?

The same way any open-source infrastructure finds widespread use, I’d say. If you’re willing to put in the elbow grease, you can probably set it up yourself (maybe even with the help of one of the frontier, uh, hammers, in its free tier). Or there might be services that act as middlemen to make it all more convenient and cheaper. But the difference is that if Service X pisses you off, then there will be Services Y, Z, A, and B who sell the same service using the same open-source infrastructure, so you always have a choice.

If you don’t like GitHub, try Gitlab, Codeberg, Gitea, and so forth. Or Bitbucket or Azure DevOps. (Don’t actually, though.)

username223 3 hours ago | parent | prev | next [-]

> I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

It’s not for me. Being a middle manager, with all of the liability and none of the agency, is not what I want to do for a living. Telling a robot to generate mediocre web apps and SVGs of penguins on bicycles is a lousy job.

troupo 6 hours ago | parent | prev | next [-]

> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

Can he? If he outsources all his thinking and understanding to agents, can he then fix things he doesn't know how to fix without agents?

Any skill is practice first and foremost. If Bob has had no practice, what then?

sd9 6 hours ago | parent [-]

My point is it doesn’t matter whether he can fix things without agents. The real world isn’t an exam hall where your boss tells you “no naughty AI!”, you just get stuff done, and if Bob can do that with agents, nobody cares how he did it.

kelnos 5 hours ago | parent | next [-]

But can Bob actually do that with agents, without limit? Right now, he's going to hit a ceiling at some point, and the Alices of the world will run circles around him.

The question is: will agents improve to the point that even the most capable Alices will never be needed to solve problems? Maybe? Maybe not? I'm worried that they won't improve to that degree.

And even if they do, what is the purpose of humans in this world?

duskdozer 5 hours ago | parent [-]

I think the real issue is that no, he can't, but corporate and government entities that decide won't care. Things will simply get worse. The problems will be left to fester as things that simply "can't be done".

troupo 5 hours ago | parent | prev [-]

> The real world isn’t an exam hall where your boss tells you “no naughty AI!”, you just get stuff done, and if Bob can do that with agents, nobody cares.

Indeed. That's why Anthropic had to hire real engineers to make sure their vibe-coded shit doesn't consume 68GB of RAM. Because real world: https://x.com/jarredsumner/status/2026497606575398987

sd9 5 hours ago | parent | next [-]

If your job has been totally unaffected by AI, then I am jealous.

I’m not trying to argue that AI can do everything today. I acknowledge that there are many things that it is not good at.

kelnos 5 hours ago | parent [-]

But do you believe that they'll continue to improve until they're good at everything, all the time, in ways a human can never match?

If yes, then that's dangerously optimistic. If not, then we'll always need humans who have learned the "hard way" (the Alices, not the Bobs). But if LLMs make it impossible for Alices to come up in the field, we're screwed.

sd9 5 hours ago | parent [-]

I think that a lot of software engineering work is a lot simpler than people like to think, and that the demand for Alices is far outweighed by the demand for Bobs. I think there will always be a place for Alices, but there will be a drastic reduction in the workforce. And I think all of this holds regardless of any future improvement in AI - in my view the models today are more than capable of bringing about this shift, it will just take time.

imtringued 3 hours ago | parent | prev [-]

Anthropic is still getting weekly memory leak reports with memory leaking at a rate of 61GB/h and all of them are getting closed automatically as duplicates.

I personally haven't tried Claude Code because I can't install it on my PC. I'm starting to get the impression that they banned non-Claude products from using their subscription, because their products are of such poor quality that everyone is fleeing from them.

plato65 6 hours ago | parent | prev | next [-]

> So if Bob can do things with agents, he can do things.

I think the key issue is whether Bob develops the ability to choose valuable things to do with agents and to judge whether the output is actually right.

That’s the open question to me: how people develop the judgment needed to direct and evaluate that output.

mattmanser 5 hours ago | parent [-]

There's a long, detailed, often repeated answer to your open question in the article.

Namely, if you can't do it without the AI, you can't tell when it's given you plausible sounding bullshit.

So Bob just wasted everyone's time and money.

carlosjobim 4 hours ago | parent [-]

You can verify by running the code and seeing if it works.

lowsong 29 minutes ago | parent | prev | next [-]

> agents aren’t going away

Why not? Once the true cost of token generation is passed on to the end user and costs go up by 10 or 100 times, and once the honeymoon delusion of "oh wow I can just prompt the AI to write code" fades, there's a big question as to if what's left is worth it. If it isn't, agents will most certainly go away and all of this will be consigned to the "failed hype" bin along with cryptocurrency and "metaverse".

croes 4 hours ago | parent | prev | next [-]

> The thing is, agents aren’t going away.

Let's wait until they have a business model that creates profit.

Most of them won't go away, but many will become outdated, or slow, or enshittified.

Imagine building your career on the quality of Google's search.

voxleone 2 hours ago | parent | prev | next [-]

[dead]

rustyhancock 5 hours ago | parent | prev [-]

The whole premise is bad. If the supervisor can do it in 2 months, then they can do it in 2 weeks with AI.

Didn't PhD projects use to be about advancing the state of the art?

Maybe we'll get back to that.