libraryofbabel 11 hours ago

I agree with you that everything is changing and that we don’t know what’s coming, but I think you really have to stretch things to imagine that it’s a likely scenario that AI-assisted coding will “dry up and blow away.” You’ll need to elaborate on that, because I don’t think it’s likely even if the AI investment bubble pops. Remember that inference is not really that expensive. Or do you think that things shift on the demand side somehow?

saltcured 9 hours ago | parent | next [-]

I think the "genie" that is out of the bottle is that there is no broad, deeply technical class who can resist the allure of the AI agent. A technical focus does not seem to provide immunity.

In spite of obvious contradictory signals about quality, we embrace the magical thinking that these tools operate in a realm of ontology and logic. We disregard the null hypothesis, in which they are more mad-libbing plagiarism machines which we've deployed against our own minds. Put more tritely: We have met the Genie, and the Genie is Us. The LLM is just another wish fulfilled with calamitous second-order effects.

Though it's enjoyable as fiction, I can't really picture a Butlerian Jihad where humanity attempts some religious purge of AI methods. It's easier for me to imagine the opposite, where the majority purges the heretics who would question their saints of reduced effort.

So, I don't see LLMs going away unless you believe we're in some kind of Peak Compute transition, which is pretty catastrophic thinking: some kind of techno/industrial/societal collapse where the state of the art stops moving forward and instead retreats. I suppose someone could believe in that outcome, if they lean hard into the idea that the continued use of LLMs will incapacitate us?

Even if LLM/AI concepts plateau, I tend to think we'll somehow continue with hardware scaling. That means models will become commoditized and able to run locally on consumer-level equipment. In the long run, running them won't require a financial bubble or dedicated power plants, nor will they be limited to priests in high towers. They will be pervasive like wireless earbuds or microwave ovens, rather than an embodiment of capital investment.

The pragmatic way I see LLMs _not_ sticking around is where AI researchers figure out some better approach. Then, LLMs would simply be left behind as historical curiosities.

danaris 8 hours ago | parent [-]

The first half of your post, I broadly agree with.

The last part... I'm not sure. The idea that we will be able to compute-scale our way out of practically anything is so taken for granted these days that many people seem to have lost sight of the fact that we have genuinely hit diminishing returns: first in general-purpose compute scaling (the end of Moore's Law, etc.), and more recently in the ability to scale LLMs. There is no longer any guarantee that we can improve the performance of training, at the very least for the larger models, by more than a few percent, no matter how much new tech we throw at it. At least not until we hit another major breakthrough (in either hardware or software), and by their very nature those cannot be counted on.

Even if we can squeeze a few more percent of optimization (or even a few more tens of percent) out of training and inference, that is, to the best of my understanding, still orders of magnitude short of what would be needed to run the full-size major models on consumer-level equipment.
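For a rough back-of-envelope sense of the gap (the parameter count and precisions below are my assumptions, since the frontier labs don't publish them):

    # Back-of-envelope: memory needed just to hold model weights.
    # The ~1T parameter count is an assumption, not a published figure.

    def weights_gb(params_billions, bytes_per_param):
        """Raw weight storage in GB (ignores KV cache and activations)."""
        return params_billions * 1e9 * bytes_per_param / 1e9

    frontier_fp16 = weights_gb(1000, 2)    # assume ~1T params at fp16
    frontier_4bit = weights_gb(1000, 0.5)  # same model, 4-bit quantized
    consumer_vram = 24                     # GB on a high-end consumer GPU

    print(f"fp16 weights:  {frontier_fp16:.0f} GB")  # ~2000 GB
    print(f"4-bit weights: {frontier_4bit:.0f} GB")  # ~500 GB
    print(f"consumer GPU:  {consumer_vram} GB")

Even granting aggressive quantization, that's a factor of twenty or more, not a few tens of percent.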

cheevly 7 hours ago | parent [-]

This is so objectively false. Sometimes I can't believe I'm even on HN anymore, given the level of confidently incorrect assertions being made.

danaris 6 hours ago | parent [-]

You, uh, wanna actually back that accusation up with some data there, chief?

cheevly 5 hours ago | parent [-]

Compare models from one year ago (GPT-4o?) to models from this year (Opus 4.5?). There are literally hundreds of benchmarks and metrics you can find. What reality do you live in?

danaris 10 hours ago | parent | prev | next [-]

I think that even if inference is "not really that expensive", it's not free.

I think that Microsoft will not be willing to operate Copilot for free in perpetuity.

I think that there has not yet been any meaningful large-scale study showing that it improves performance overall, and there have been some studies showing that it does the opposite, despite individuals' feeling that it helps them.

I think that a lot of the hype around AI rests on the promise that it is going to get better. If that (i.e., training) becomes prohibitively expensive, and there's no proof that it's helping, and keeping the subscriptions going is a constant money drain, and there's no more drumbeat of "everything must become AI immediately and forever", then more and more institutions are going to start dropping it.

I think that if the only programmers using LLMs to aid their coding are hobbyists, independent contractors, or people in small shops who get to fully dictate their own setups, that's a small enough segment of the programming market that we can say it won't help students to learn that way, because they won't be allowed to code that way in a "real job".

LtWorf 10 hours ago | parent | prev | next [-]

If they start charging what it actually costs them, for example…

libraryofbabel 10 hours ago | parent [-]

There is so much confusion on this topic. Please don't spread more of it; the answers are just a quick google away. To spell it out:

1) AI companies make money on the tokens they sell through their APIs. At my company we run Claude Code by buying Claude Sonnet and Opus tokens from AWS Bedrock; AWS and Anthropic both make money on those tokens. The unit economics are very good here: estimates are that Anthropic and OpenAI have a gross margin of roughly 40% on selling tokens (see the toy sketch after this list).

2) Claude Code subscriptions are probably somewhat subsidized on a per-token basis, for strategic reasons (Anthropic wants to capture the market). Even this is complicated, though: the usage distribution is such that Anthropic makes money on some subscribers while subsidizing the ultra-heavy-usage vibe coders who max out their subscriptions. If they lowered the cap, most subscribers still wouldn't max out, and they could start making money; but they'd probably upset a lot of the loudest ultra-heavy-usage influencer types.

3) The biggest cost AI companies have is training new models. That is the reason AI companies are not net profitable. But that's a completely separate set of questions from what inference costs, which is what matters here.
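To put toy numbers on (1), here's the shape of the margin math. The price is in the ballpark of published per-million-token API rates; the serving cost is an invented number chosen to illustrate the ~40% estimate:

    # Toy gross-margin sketch for selling inference tokens.
    # Both numbers are assumptions for illustration, not Anthropic's books.

    price_per_mtok = 15.00        # $ per million output tokens (assumed)
    serving_cost_per_mtok = 9.00  # $ compute cost to serve them (assumed)

    margin = (price_per_mtok - serving_cost_per_mtok) / price_per_mtok
    print(f"gross margin on inference: {margin:.0%}")  # 40%

    # Note what this ignores: training the next model. Inference can be
    # profitable per token while the company is net unprofitable overall.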

somewhereoutth 6 hours ago | parent [-]

Without training new models, existing models will become more and more out of date until they are no longer useful, regardless of how cheap inference is. Training new models is part of the cost basis and can't be hand-waved away.

SgtBastard 2 hours ago | parent [-]

Only if you're relying on the model to recall facts from its training set. Intuitively, at sufficient complexity, a model's ability to reason is what's critical, and its answers can be kept up to date with RAG (rough sketch below).

Unless you mean out of date == no longer SOTA reasoning models?
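For what it's worth, here's a minimal sketch of the RAG idea, with a toy corpus and a naive keyword retriever (a real system would use embeddings, vector search, and an actual model call):

    # RAG in miniature: fetch fresh documents at inference time so a
    # frozen model reasons over them instead of recalling trained facts.

    corpus = [
        "2025-11-01: framework X released v3, deprecating the old config.",
        "2025-10-12: library Y's maintainers announced a license change.",
    ]

    def retrieve(query, docs, k=1):
        """Rank docs by naive keyword overlap with the query."""
        q = set(query.lower().split())
        return sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                      reverse=True)[:k]

    def build_prompt(query):
        context = "\n".join(retrieve(query, corpus))
        # None of this needs to be in the weights; it arrives at query time.
        return f"Answer using only this context:\n{context}\n\nQ: {query}"

    print(build_prompt("what changed in framework X v3?"))

The frozen model never has to have seen the framework X release; it only has to reason over the retrieved text.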

somewhereoutth 6 hours ago | parent | prev [-]

LLMs will stop being trained, as that enormous upfront investment will have been found not to produce the required return. People will continue to use the existing models for inference, not least as the (now bankrupt) LLM labs attempt to squeeze the last juice out of their remaining assets (the trained models). However, these models will become more and more outdated, and less and less useful, until they are not worth the electricity it takes to run inference anymore. Thus it will end.