andy99 a day ago

Autonomy generally, not just privacy. You never know what the future will bring: AI will be enshittified, and so will hubs like huggingface. It’s useful to have an off-grid solution that isn’t subject to VCs wanting to see their capital returned.

Aurornis a day ago | parent | next [-]

> You never know what the future will bring, AI will be enshittified and so will hubs like huggingface.

If anyone wants to bet that future cloud hosted AI models will get worse than they are now, I will take the opposite side of that bet.

> It’s useful to have an off grid solution that isn’t subject to VCs wanting to see their capital returned.

You can pay cloud providers for access to the same models that you can run locally, though. You don’t need a local setup even for this unlikely future scenario where all of the mainstream LLM providers simultaneously decide to make their LLMs poor quality and none of them sees this as a market opportunity to provide good service.
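To make that concrete: the same open-weight model can usually be reached through an OpenAI-compatible API whether a provider hosts it or you serve it yourself, so switching costs little. A minimal sketch, where the base URL, API key, and model name are placeholder assumptions (here a local Ollama-style endpoint):

  from openai import OpenAI

  # The same client code works against a hosted provider or a local
  # server (e.g. llama.cpp / Ollama exposing an OpenAI-compatible API);
  # only base_url and the model name change.
  client = OpenAI(
      base_url="http://localhost:11434/v1",  # or a hosted provider's URL
      api_key="placeholder",                 # local servers typically ignore this
  )

  reply = client.chat.completions.create(
      model="llama3.1:8b",  # same open-weight model, local or hosted
      messages=[{"role": "user", "content": "Hello"}],
  )
  print(reply.choices[0].message.content)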

But even if we ignore all of that and assume that cloud inference everywhere goes bad at the same time at some point in the future, you would still be better off buying your own inference hardware when that happens. Spending the money to buy two M3 Ultras right now to prepare for an unlikely future event is illogical.

The only reason to run local LLMs is if you have privacy requirements or you want to do it as a hobby.

CamperBob2 a day ago | parent [-]

> If anyone wants to bet that future cloud hosted AI models will get worse than they are now, I will take the opposite side of that bet.

OK. How do we set up this wager?

I'm not knowledgeable about online gambling or prediction markets, but further enshittification seems like the world's safest bet.

Aurornis a day ago | parent [-]

> but further enshittification seems like the world's safest bet.

Are you really, actually willing to bet that today's hosted LLM performance per dollar is the peak? That it's all going to be worse at some fixed future date (a necessary condition for establishing a bet)?

It would need to be evaluated by a standard benchmark, agreed upon ahead of time, with no loopholes or vague verbiage allowing something to be claimed as "enshittification" or some other ill-defined failure.

CamperBob2 a day ago | parent [-]

Sorry, didn't realize what you were actually referring to. Certainly I'd assume the models will keep getting better from the standpoint of reasoning performance. But much of that improved performance will be used to fool us into buying whatever the sponsor is selling.

That part will get worse, given that it hasn't really even begun ramping up yet. We are still in the "$1 Uber ride" stage, where it all seems like a never-ending free lunch.

chrsw a day ago | parent | prev [-]

Yes, I agree. And you can add security to that too.