cheeseblubber 2 days ago

It makes sense if you account for the cost of intelligence getting cheaper every year. Per unit of intelligence, most models are getting far cheaper. We get better hardware, architectures, training techniques, inference optimizations, and caching. All those improvements add up. In early 2022 costs were dropping about 10x annually; now it's closer to 2x-5x annually. The cost is still dropping, whereas Uber can only get its costs down by so much.
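As a rough sketch of what those rates imply (the dollar figure is hypothetical, not from the comment), compounding even the conservative 2x/year decline adds up quickly:

```python
# Hypothetical illustration: project the cost of a fixed level of
# capability after several years of annual price declines.
def projected_cost(initial_cost: float, annual_reduction: float, years: int) -> float:
    """Cost after `years`, if the price divides by `annual_reduction` each year."""
    return initial_cost / (annual_reduction ** years)

# At the older ~10x/year pace, three years turns $1.00 into $0.001:
print(projected_cost(1.00, 10, 3))
# At a more conservative 2x/year, the same span still yields $0.125:
print(projected_cost(1.00, 2, 3))
```

Even the slower curve cuts costs by ~8x over three years, which is the kind of headroom the Uber comparison lacks.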

mkesper 2 days ago | parent [-]

Better hardware would have to be bought with additional money. And no one can reliably forecast how much optimization is left in the game.

cheeseblubber a day ago | parent [-]

My problem with the article is that it doesn't even mention this fact. The Uber metaphor is often brought up, but it breaks down at cost optimization. It also wouldn't be fair to say we are at peak LLM efficiency and that there are no improvements left.