raincole 7 hours ago

> the day when LLM-assisted coding is commoditized

Like yesterday? LLM-assisted coding is $100/mo. It looks very commoditized when most households in the developed world pay more than that for electricity.

My definition of LLM-assisted coding is that you fully understand every change and every single line of the code. Otherwise it's vibe coding. And I believe that if one is honest about this principle, it's very hard to deplete the quota of the $100 tier.

windexh8er 6 hours ago | parent | next [-]

> Like yesterday? LLM-assisted coding is $100/mo. It looks very commoditized when most households in the developed world pay more than that for electricity.

But it's not $100/mo. I think the best showcase of where AI is at is on the generative video side. Look at players like Higgsfield: check out their pricing, then go look at Reddit for actual experiences. With video generation the results are very easy to see. With code generation the results are less clear for many users, especially when things "just work".

Again, it's not $100/month for Anthropic to serve most users. These costs are still being subsidized, and as more expensive plans roll out with access to "better" models and "more" tokens and context, the true cost per user is slowly starting to be exposed. I routinely hit limits with Anthropic that I hadn't been hitting at the same (and even lower) utilization. I dumped the Pro Max account recently because the value wasn't there anymore. I'm convinced that Opus 3 was Anthropic's pinnacle at this point, and while today's SotA models are good, they're tuned to push people toward paying for overages at a significantly faster consumption rate than a right-sized plan for their usage.

The reality is that nobody can afford to keep offering these models at current price points and be profitable anytime in the near future. And it's becoming more and more clear that Google is in a great position to let Anthropic and OAI duke it out with other people's money, while Google has the cash, infrastructure, and reach to play the waiting game: keeping up without having to worry about the constraints its competitors face.

But I'd argue that nothing has been commoditized, since we have no clue what LLMs cost at scale, and it seems nobody wants to talk about that publicly.

KaiserPro 6 hours ago | parent [-]

> I think the best showcase of where AI is at is on the generative video side. Look at players like Higgsfield. Check out their pricing and then go look at Reddit for actual experiences. With video generation the results are very easy to see

Video is a different ballgame entirely: it runs slower than realtime even on _large_ GPUs. Moreover, because of inter-frame consistency, it's really hard to transfer and keep context.

Running inference on text is, or can be, very profitable. It's research and development that's expensive.

windexh8er 5 hours ago | parent [-]

My point wasn't the delta in work between video and text generation. It was that the degradation of a prompt is much more visible (because it's literal). But I generally agree on the research/dev part.

sidrag22 6 hours ago | parent | prev | next [-]

> fully understand every change and every single line of the code.

I'm probably just not being charitable enough to what you mean, but that's an absurd bar that almost nobody meets, even for fully handwritten code. Nothing would get done if they did. But again, my emphasis is on the fact that I'm probably just not being charitable to what you mean.

Maxatar 6 hours ago | parent | next [-]

You're most likely being pedantic, like when someone says they understand every single line of this code:

    x = 0
    for i in range(1, 10):
      x += i
    print(x)  # 45

They don't mean they understand the silicon substrate of the microprocessor executing microcode, or the CMOS sense amplifiers reading the SRAM cells that cache the loop variable.

They just mean they can more or less follow along with what the code is doing. You don't need to be very charitable to understand what he genuinely meant, and understanding the code one writes is how many (but not all) professional software developers, at least those who didn't just copy and paste stuff from Stack Overflow, used to carry out their work.

sidrag22 6 hours ago | parent | next [-]

You drew it to its most uncharitable conclusion for sure, but yeah, that's pretty much the point I was making.

How deeply do I need to understand range() or print() to utilize either, on the slightly less extreme end of the spectrum?
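
To make the "layers" point concrete, here is a small sketch: range() exposes behavior you can rely on without ever reading its implementation, which is arguably the practical level of "understanding" being discussed.

```python
r = range(1, 10)

# Usable facts that require no knowledge of CPython internals:
print(len(r))        # 9: range knows its length without materializing items
print(5 in r)        # True: membership is computed arithmetically, not by scanning
print(list(r)[:3])   # [1, 2, 3]: elements are produced only on demand
```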

But yeah, I'm pretty sure it's a point I could have kept to myself, and been charitable instead.

_puk 5 hours ago | parent | prev [-]

"Understand your code" in this day and age likely means reaching the point of deterministic evaluation.

print(x) is a great example. That's going to print x. Every time.

Agent.print(x) is pretty likely to print x every time. But hey, who knows; maybe it's having an off day.
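
A minimal sketch of that distinction, where `agent_print` is a hypothetical stand-in for an LLM-backed call (the sampling behavior here is illustrative, not a real agent API):

```python
import random

def print_x(x):
    # Deterministic: the same input produces the same output, every time.
    print(x)

def agent_print(x):
    # Hypothetical stand-in for an LLM-backed "agent" call: sampling means
    # the output is a distribution over responses, not a guarantee.
    if random.random() < 0.95:
        print(x)
    else:
        print(f"Here is the value you asked for: {x}")
```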

hunterpayne 12 minutes ago | parent | prev | next [-]

I do. If you don't, maybe you shouldn't be writing software professionally. And yes, I've written both DBs and compilers so I do understand what is happening down to the CMOS. I think what you are doing is just cope.

thomasmg 6 hours ago | parent | prev | next [-]

Well, that's how it mostly worked until recently... unless the developer copied and pasted from Stack Overflow without understanding much. Which did happen.

satvikpendem 6 hours ago | parent | prev | next [-]

How is that an absurd bar? If you're handwriting code, you need to know what you actually want to write in the first place, hence you understand all the code you write. The code the AI produces should therefore also be understood by you. Anything less is indeed vibe coding.

Maxatar 6 hours ago | parent | next [-]

A lot of developers don't actually understand the code they write. Sure nowadays a lot of code is generated by LLMs, but in the past people just copied and pasted stuff off of blogs, Stack Overflow, or whatever other resources they could find without really understanding what it did or how it worked.

Jeff Atwood, along with numerous others (whom Atwood cites on his blog [1]), was not exaggerating when he observed that the majority of candidates with existing professional experience, and even MSc degrees, were unable to code very simple solutions to trivial problems.

[1] https://blog.codinghorror.com/why-cant-programmers-program/

sidrag22 6 hours ago | parent | prev [-]

It's an absurd bar if you're being an uncharitable jerk like I was; the layers go deep, and technically I can claim I have never fully grasped any of my code. It's likely just a dumb point to bring up, tbh.

satvikpendem an hour ago | parent [-]

I saw your reply to another comment [0], and I see what you mean now. By "understand each line of code" I meant that one would know how that for loop works, not the underlying levels of the language's implementation. I replied initially because lots of vibe-coding devs in fact do not read all the code before submitting, much less actually review it line by line and understand each line.

[0] https://news.ycombinator.com/item?id=47894279

andrewjvb 6 hours ago | parent | prev | next [-]

It's a good point. To me this really comes down to the economics of the software being written.

If it's low-stakes, then the required depth to accept the code is also low.

sbarre 6 hours ago | parent | prev | next [-]

Could they have meant "every line of code being committed by the LLM" within the current scope of work?

That's how I read it, and I would agree with that.

raincole 5 hours ago | parent | prev | next [-]

I mean "understanding it just like when you hand wrote the code in 2019."

Obviously I don't mean "understanding it so you can draw the exact memory layout on the white board from memory."

torben-friis 5 hours ago | parent | prev [-]

You don't understand every change you make in the PRs you offer for review?

fsckboy 5 hours ago | parent | prev | next [-]

> LLM-assisted coding is $100/mo. It looks very commoditized when most households in the developed world pay more than that for electricity.

This is a small nit, but you still have to pay your electric bill; the $100/mo is on top of that. If you're doing cost accounting, you don't want to neglect any costs. Just because you can afford to lease a car doesn't mean you can afford to lease a second car.

rectang 5 hours ago | parent | prev | next [-]

Commoditization will be complete for my purposes when an LLM trained on a legitimately licensed corpus can achieve roughly what Opus 4.5+ or the highest powered GPTs can today.

I anticipate a Napster-style reckoning at some point when there's a successful high-profile copyright suit around obviously derivative output. It will probably happen in video or imagery first.

BowBun 5 hours ago | parent | prev [-]

In industry, the cost is more than $100/mo per engineer. With increased adoption, and given what I know now, I expect full-time devs to rack up $500 to $2,000 in usage bills if they're going full parallel agentic dev. Personal usage for projects and non-production software is not a benchmark, IMO.

mchusma 5 hours ago | parent | next [-]

I work with a lot of full-time devs, and it is very hard to go beyond the $200 Max plan. If you use API credits (and I think the enterprise plan kind of forces you to do this), you can definitely incur that much, particularly if you're not using prompt caching and the like.

But I and others in my company have very heavy usage. We only rarely, with parallel agentic processes, run out of the $200 a month plan.

And what do I mean by "hard"? I mean it requires a lot of active thinking about how to max it out. I'm sure there are some use cases where it's not hard to do this, but in general I find that most devs can't even max out the $100 a month plan, because they haven't quite figured out how to leverage it to that degree yet.

(Again, if someone is using the API instead of subscription, I wouldn't be surprised to see $2,000 bills.)
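
A back-of-envelope sketch of how API billing diverges from a flat subscription; the per-million-token prices below are illustrative assumptions, not actual published pricing:

```python
# Assumed (hypothetical) prices in dollars per million tokens.
INPUT_PER_MTOK = 3.00
OUTPUT_PER_MTOK = 15.00

def monthly_api_cost(input_mtok: float, output_mtok: float) -> float:
    """Estimate a month's API bill from token volume, in millions of tokens."""
    return input_mtok * INPUT_PER_MTOK + output_mtok * OUTPUT_PER_MTOK

# A heavy parallel-agent user pushing, say, 500M input / 60M output tokens
# would land squarely in the range discussed above:
print(monthly_api_cost(500, 60))  # 2400.0
```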

ebiester 5 hours ago | parent [-]

Business/Enterprise accounts are billed at $20/seat plus API prices, not subscription prices. You can give users a monthly dollar quota or let them go unlimited, but they're not being subsidized the way Team seats are. And Team can't get a 20x plan, from what I can tell.

adastra22 5 hours ago | parent | prev [-]

I routinely use $4k to $5k worth of tokens a month on my $200/mo Max subscription. I don't even code every day.

You can use a Max subscription for work, btw.

hunterpayne 10 minutes ago | parent [-]

You do understand the concept of a subsidy right?