542458 2 hours ago

IMO there are two things that make me optimistic that we won't see a big rug pull where the price-to-capability ratio skyrockets relative to today:

* As you’ve noted, people keep finding ways of slamming more intelligence into smaller models, meaning that a given hardware spec delivers more model capability over time.

* Hardware will continue to improve and supply will catch up to demand, meaning that a dollar will deliver more hardware spec over time.

I hope that one day we’ll look back on the current model of “accessing AI through provider APIs” the same way we now look back on “everyone connecting to the company mainframe.”

spacebanana7 an hour ago | parent

I also hope that we’ll find effective ways to distribute load between small local models and heavyweight remote models. Sort of like what Apple tried to do in iOS.

So much of what I ask Codex to do doesn't require full GPT-5 intelligence, and if 75% of the tokens were generated locally, that would be a massive cost savings.
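
As a rough illustration (not how Codex or Apple actually does it), the routing could be as simple as escalating to the hosted frontier model only when a request looks hard. Everything here is a made-up placeholder: the function names, the length threshold, and the keyword check are illustrative, and the two generate functions stand in for whatever local runtime and hosted API you actually use.

    # Toy router: send easy prompts to a small local model, hard ones to a
    # remote frontier model. Both generate functions are placeholders.

    def looks_hard(prompt: str) -> bool:
        # Crude, illustrative heuristic: long prompts or refactor-style asks
        # go to the big model; short completions stay local.
        return len(prompt) > 2000 or "refactor" in prompt.lower()

    def local_generate(prompt: str) -> str:
        raise NotImplementedError  # e.g. call your local model server here

    def remote_generate(prompt: str) -> str:
        raise NotImplementedError  # e.g. call the provider's hosted API here

    def route(prompt: str) -> str:
        # Only pay for remote tokens when the heuristic says we need them.
        return remote_generate(prompt) if looks_hard(prompt) else local_generate(prompt)

Even a dumb heuristic like this captures the point: if most requests fall below the threshold, most tokens never touch the metered API.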