| ▲ | dhpe 2 days ago |
| I have programmed 30K+ hours. Do LLMs make bad code: yes, all the time (at the moment they have zero clue about good architecture). Are they still useful: yes, extremely so. The secret sauce is knowing exactly what you'd do without them. |
|
| ▲ | dejv 2 days ago | parent | next [-] |
| "Do LLMs make bad code: yes all the time (at the moment zero clue about good architecture). Are they still useful: yes, extremely so." Well, lets see how all the economics will play out. LLMs might be really useful, but as far as I can see all the AI companies are not making money on inference alone. We might be hitting plateau in capabilities with money being raised on vision of being this godlike tech that will change the world completely. Sooner or later the costs will have to meet the reality. |
| |
▲ | Aurornis 2 days ago | parent | next [-] | | > but as far as I can see all the AI companies are not making money on inference alone The numbers aren’t public, but from what companies have indicated, it seems inference itself would be profitable if you could exclude all of the R&D and training costs. But this debate about startups losing money happens endlessly with every new startup cycle. Everyone forgets that losing money is an expected operating mode for a high-growth startup. The models and hardware continue to improve. There is so much investment money accelerating this process that we have plenty of runway to keep improving before companies have to switch to full profit-focus mode. But even if we ignore that and assume they had to switch to profit mode tomorrow, LLM plans are currently so cheap that even a doubling or tripling isn’t going to be a problem. So what if the monthly plans start at $40 instead of $20 and the high-usage plans go from $200 to $400 or even $600? The people using these for jobs that pay $10K or more per month can absorb that. That’s not going to happen, though. If all model progress stopped right now, the companies would still be capturing cheaper compute as data center buildouts were completed and next-generation compute hardware was released. I see these predictions as the current equivalent of all the predictions that Uber was going to collapse when the VC money ran out. Instead, Uber quietly settled into steady operation, prices went up a little bit, and people still use Uber a lot. Uber did this without the constant hardware and model improvements that LLM companies benefit from. | | |
▲ | mtone a day ago | parent [-] | | > if you could exclude all of the R&D and training costs LLMs have a short shelf-life. They don't know anything past the day they're trained. It's possible to feed or fine-tune them with a bit of updated data, but their world knowledge and views are firmly stuck in the past. It's not just news - they'll also trip up on new syntax introduced in the latest version of a programming language. They could save on R&D, but I expect training costs will be recurring regardless of advancements in capability. |
| |
▲ | Workaccount2 2 days ago | parent | prev | next [-] | | If the tech plateaus today, LLM plans will go to $60-80/mo, Chinese-hosted Chinese models will be banned (national security will be the given reason), and the AI companies will be making ungodly money. I'm not gonna dig out the math again, but if AI usage follows the popularity path of cell phone usage (which seems to be the case), then the trillions invested have an ROI of 5-7 years. Not bad at all. | | |
▲ | blks a day ago | parent | next [-] | | Developers will be paying; other people, who use it for emails or bun-baking recipes, won’t. | |
▲ | iLoveOncall 2 days ago | parent | prev [-] | | OpenAI would still lose money if the basic subscription cost $500 and they had the same number of subscribers as right now. There's not a single model shop that's making any money, let alone ungodly amounts. | | |
| ▲ | Workaccount2 2 days ago | parent | next [-] | | These costs you are referencing are training/R&D costs. Take those largely away, and you are left with inference costs, which are dirt cheap. Now you have a world of people who have become accustomed to using AI for tons of different things, and the enshittification starts ramping up, and you find out how much people are willing to pay for their ChatGPT therapist. | | | |
▲ | Der_Einzige 2 days ago | parent | prev [-] | | This is literally lies and total bullshit. They’d be making insane profits at those prices. They don’t have to spend all their cash at once on the 30GW of data center commitments. Why go on the internet and tell stupid lies? |
|
| |
| ▲ | ImprobableTruth 2 days ago | parent | prev | next [-] | | They're not making money on inference alone because they blow ungodly amounts on R&D. Otherwise it'd be a very profitable business. | | |
▲ | daveguy 2 days ago | parent [-] | | Private equity will swoop in, bankrupt the company to shed the training/R&D debt, and hold on to the models in a restructuring. Plus enshittification to squeeze maximum profit. This is why they're referred to as vulture capitalists. |
| |
| ▲ | mNovak 2 days ago | parent | prev | next [-] | | Doesn't OpenRouter prove that inference is profitable? Why would random third parties subsidize the service for other random people online? Unless you're saying that only large frontier models are unprofitable, which I still don't think is the case but is harder to prove. | |
▲ | 20k 2 days ago | parent | prev | next [-] | | This is one of the reasons why I'm surprised to see so many people jump on board. We're clearly in the "release the product for free/cheap to gain customers" portion of the enshittification plan, before the company starts making it complete garbage to extract as much money as possible from the userbase. Having good-quality dev tools is non-negotiable, and I have a feeling that a lot of people are going to find out the hard way that reliability, and not being owned by a profit-seeking company, is the #1 thing you want in your environment. | |
▲ | NitpickLawyer 2 days ago | parent | prev | next [-] | | > but as far as I can see all the AI companies are not making money on inference alone. This was the point many people missed about why GPT5 was such an important launch (quality of the models and vibes aside): it brought the model sizes (and hence inference cost) down to more sustainable numbers. Compared to the previous SotA (GPT4 at launch, or the o1/o3 series), GPT5 is 8x-12x cheaper! I feel that a lot of people never re-calibrated their views on inference. And there's another place where you can verify your take on inference - the 3rd-party providers that offer "open" models. They have zero incentive to subsidise prices, because people that use them often don't even know who serves them, so there's zero brand recognition (say, when using models via openrouter). These 3rd-party providers have all converged towards a similar price point per billion parameters. You can check those prices and get an idea of what would be profitable and at what sizes. Models like dsv3.2 are really, really cheap to serve for what they provide (at least gpt5-mini equivalent, I'd say). So yes, labs could totally become profitable on inference alone. But they don't want that, because there's an argument to be made that the best will "keep it all". I hope, for our sake as consumers, that it isn't the case. And so far this year it seems that it's not: we've had all 4 big labs one-up each other several times, and they're keeping each other honest. And that's good for us. We get frontier-level offerings at $10-25/MTok (Opus, gpt5.2, gemini3pro, grok4), and we get highly capable yet extremely cheap models at $1.5-3/MTok (gemini3-flash, gpt-minis, grok-fast, etc.). |
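To make those per-token prices concrete, here is a rough back-of-envelope sketch in Python; the usage volume below is an illustrative assumption, not a figure from any provider:

    # Back-of-envelope: what the quoted $/MTok prices imply for one heavy user.
    # The token volume is an assumption for illustration only, not provider data.
    FRONTIER_USD_PER_MTOK = 15.0   # middle of the quoted $10-25/MTok range
    CHEAP_USD_PER_MTOK = 2.0       # middle of the quoted $1.5-3/MTok range

    # Assume ~2M tokens/day (prompts + completions) over ~22 working days.
    tokens_per_month = 2_000_000 * 22

    for label, usd_per_mtok in [("frontier", FRONTIER_USD_PER_MTOK),
                                ("cheap", CHEAP_USD_PER_MTOK)]:
        monthly_cost = tokens_per_month / 1_000_000 * usd_per_mtok
        print(f"{label}: ~${monthly_cost:,.0f}/month at the assumed usage")
    # -> frontier: ~$660/month, cheap: ~$88/month under these assumptions

Under these assumptions, a $200/month flat plan would be underwater for the heaviest frontier-model users but comfortably profitable for lighter ones.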
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | nl 2 days ago | parent | prev [-] | | Anthropic - for one - is making lots of money on inference. |
|
|
| ▲ | qsort 2 days ago | parent | prev | next [-] |
One of the mental frameworks that convinced me is how much of a "free action" it is. Have the LLM (or the agent) churn on some problem while you do something else. Come back and review the result. If you had to put significant effort into each query, I agree it wouldn't be worth it, but you can just type something into the textbox and wait. |
| |
▲ | daveguy 2 days ago | parent [-] | | Are you counting the time/effort it takes to evaluate the accuracy and relevance of what an LLM produces when it's left to "think" for a while? |
|
|
| ▲ | ManuelKiessling a day ago | parent | prev | next [-] |
If I ask a SOTA model to just implement some functionality, it doesn’t necessarily do so using a great architectural approach. But whenever I ask a SOTA model for architecture recommendations, and frame the problem correctly, I get top-notch answers every single time. LLMs are terrific software architects, and that’s not surprising: there must be tons of great advice on how to correctly build software in the training corpus. They simply aren’t great software architects by default. |
| |
▲ | Loic a day ago | parent [-] | | You know that if you ask the LLM correctly you get top-notch answers, because you have the experience to judge whether the answer is top-notch or not. I spend a couple of hours per week teaching software architecture to a junior on my team, because he doesn't yet have the experience to ask correctly, let alone assess the quality of the LLM's answer. |
|
|
| ▲ | _rpxpx 2 days ago | parent | prev | next [-] |
OK, maybe. But how many programmers will know this in 10 years' time, as use of LLMs becomes normalized? I'd like to hear what employers are already saying about recent graduates. |
| |
▲ | bartread 2 days ago | parent | next [-] | | They’d have to be hiring recent graduates for you to hear that perspective. And, as much as what I’ve just said is hyperbolically pessimistic, there is some truth to it. In the UK a bunch of factors have coincided to put the brakes on hiring, especially at smaller and mid-size businesses. AI is the obvious one that gets all the press (although how much it’s really to blame is open to question, in my view), but the recent rise in the employer AI contribution and now (anecdotally) the employee rights bill have come together to make companies quite gun-shy when it comes to hiring. | | |
| ▲ | bartread 2 days ago | parent [-] | | *Employer NI contribution, not employer AI contribution - a pox be upon autocorrect |
| |
▲ | energy123 2 days ago | parent | prev | next [-] | | I'm uncertain that programming will be a major profession in 10 years. Programming is more like math than creative writing. It's largely verifiable, which is exactly the kind of domain where RL has repeatedly been shown to eventually reach significantly better-than-human performance. Our saving grace, for now, is that it's not entirely verifiable, because things like architectural taste are hard to put into a test. But I would not bet against it. | |
| ▲ | spaceman_2020 2 days ago | parent | prev | next [-] | | This is nothing new - entire industries and skills died out as the apprenticeship system and guilds were replaced by automation and factories | |
| ▲ | nutjob2 2 days ago | parent | prev | next [-] | | If they don't learn that they won't get very far. This is true for everything, any tool you might use. Competent users of tools understand how they work and thus their limitations and how they're best put to work. Incompetents just fumble around and sometimes get things working. | |
| ▲ | QuiDortDine 2 days ago | parent | prev [-] | | hahah what are you talking about, there's no such thing as long term! |
|
|
| ▲ | 2 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | feverzsj 2 days ago | parent | prev | next [-] |
| So, it's like taking off your pants to fart. |
|
| ▲ | bilsbie 2 days ago | parent | prev [-] |
I mean, if you leaned heavily on Stack Overflow before AI then nothing really changes. It’s basically the same idea but faster. |