Towaway69 5 hours ago
What the article doesn't touch on is the vendor lock-in that is currently underway. Many corporations are moving to an AI-based development process that is reliant on the big AI providers. Once a codebase has become fully agentic, i.e., only agents fundamentally understand it and can modify it, prices will start rising. After all, these loss-making AI companies will eventually need to recoup their investments. Sure, it will perhaps be possible to swap out the underlying AI used to develop the codebase, but will the alternatives be significantly cheaper? Of course, the invisible hand of the market will solve that problem, something OPEC has so successfully done for the oil market. Another issue: once the codebase is agentic and the price of human developers falls far enough that it becomes significantly cheaper to hire them again, will those humans be able to understand the agentic codebase? Is this a one-way transition? I'm sure the pro-AI crowd will explain that technology only gets cheaper and better and that fundamentally it isn't an issue. Just like oil prices and the global economy: fundamentally, everything is getting better.
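To make "interchange the underlying AI" concrete: the usual mitigation is to keep application code behind a thin provider-agnostic interface so the vendor can be swapped without rewriting call sites. A minimal sketch; all names here (`LLMProvider`, `EchoProvider`, `generate`) are hypothetical, not any vendor's actual SDK:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Hypothetical provider-agnostic interface: the only surface
    application code is allowed to depend on."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoProvider(LLMProvider):
    """Stand-in backend for testing; a real implementation would
    wrap a specific vendor's API behind the same interface."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def generate(provider: LLMProvider, prompt: str) -> str:
    # Application code sees only the interface, so switching vendors
    # means swapping the provider object, not rewriting call sites.
    return provider.complete(prompt)

print(generate(EchoProvider(), "refactor this module"))
```

This doesn't solve the deeper problem the comment raises (an agentic codebase only agents understand), but it does keep the choice of vendor a configuration decision rather than an architectural one.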
_the_inflator 2 hours ago
I have similar concerns. We will miss SaaS dearly. I think history is repeating itself, just as with DVDs and streaming: we simply bought the same movie twice. AI feels more and more the same.

Half a year ago, Claude Opus was Anthropic's most expensive model. Boy, using Claude Opus 4.6 in the 500k version is like paying a dollar per minute now. My once-decent budgets get hit not after weeks but after days (!) now. And I am not even using agents or subagents, which would only multiply the costs, and for what? So where we arrive, more and more, is the same place as always: low, medium, and luxury tiers. A boring service with different quality and payment structures.

Proof: you cannot compensate with prompt engineering anymore. A month ago you could fix any model discrepancies by being more clever and elaborate with your prompts. Not anymore. There is a hidden factor now that accounts for exactly that. It seems the reliance on skills and different tiers is moving us away from prompt engineering, which is treated more and more as jailbreaking rather than guidance.

Prompt engineering lately became so mundane that I wonder what vendors were really doing with the usage data they analyzed. It seems vendors tied certain inquiries to certain outcomes, modeled by multi-step prompting that was reduced internally to certain trigger sentences, creating the illusion that you prompted your way to the result when in fact you hadn't. All you did was ask for the same result thousands of users had asked for before, and the LLM took a statistical shortcut to deliver it.
SaucyWrong 4 hours ago
This is a great point, and I routinely use it as an argument for why seasoned professionals should work hard to keep their skills and why new professionals should build them in the first place. I would never be comfortable leasing my ability to perform detailed knowledge work from one of these companies. Sometimes the argument lands; very often it doesn't. As you said, a common refrain is, "but prices won't go up, cost to serve is the highest it will ever be." Or, "inference is already massively profitable and will become more so in the future--I read so on a news site." And that remark, for me, is unfortunately a discussion-ender. I have never had a productive conversation about this with somebody after they make these remarks. People saying these things have placed their bets already and are about to throw the dice.
eaglelamp 3 hours ago
No one ever asks how much it costs Facebook or Uber to serve requests, because it is irrelevant: they set prices to maximize their profit, like any good monopolist. Similarly, the future cartel of big providers will charge their captive users whatever they can get away with, not the cost of inference. The current discourse around "AI", swarms of agents producing mountains of inscrutable spaghetti, is a tell that this is the future the big players are looking for. They want to create a captive market of token tokers who have no hope of untangling the mess they made when tokens were cheap without buying even more at full price.
emporas 2 hours ago
Code is so low-entropy that smaller, more economical models will be up to the task, the same as the gigantic models from big providers are today. No worries there: the huge improvements we see today from GPT and Claude are, at their heart, just reinforcement learning (CoT, i.e. chain-of-thought thinking tokens, is just one example of many). RL is the cheapest kind of training one can perform, as far as I understand; please correct me if that's not the case. In the economy, the invisible hand manages to produce everything cheaper and better all the time, but in the digital space the open-source invisible hand makes everything completely free.
Aurornis 3 hours ago
> the prices will start rising. After all, these loss making AI companies will eventually need to recoup on their investments.

I would bet a lot of money that the price of LLM assistance will go down, not up, as the hardware and software advance.

Every genre-defining startup seems to go through this same cycle where the naysayers tell us it's all going to collapse once the investment money runs out. That was definitely true for technologies without use cases (remember the blockchain-all-the-things era?), but it is not true for businesses that have actual users. Some early players may go bust by chasing market share without a real business plan, like the infamous Webvan grocery delivery service. But even Webvan was directionally correct, with delivery services now a booming business sector. Uber is another good example. We heard for years that ridesharing was a fad that would go away as soon as the VC money ran out. Instead, Uber became a profitable company, and almost nobody noticed because the naysayers had moved on to something else.

AI is different because the hardware keeps getting faster and cheaper to operate. Even if LLM progress stalled at Opus 4.6 levels today, it would still be very useful, and it would get cheaper with each passing year as hardware improved.

> I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices

Comparing compute costs to oil prices is apples to oranges. Oil is a finite resource that comes out of the ground, and the technology to extract it doesn't improve much over decades. AI compute gets better and cheaper every year because the technology advances rapidly. GPU servers that cost as much as cars a few years ago are now depreciated and available for cheap because the new generation is vastly faster, and the one after that will be faster still.

If you're mentally comparing this to things like oil, you're not on the right track.
fantasizr 4 hours ago
This is a good point. Some of the AI companies are trying to hook CS students so they'll only know "dev" as a function of their products. As the drug dealers say, the first one's free.