Aurornis a day ago:
> You never know what the future will bring, AI will be enshittified and so will hubs like huggingface.

If anyone wants to bet that future cloud hosted AI models will get worse than they are now, I will take the opposite side of that bet.

> It’s useful to have an off grid solution that isn’t subject to VCs wanting to see their capital returned.

You can pay cloud providers for access to the same models that you can run locally, though. You don’t need a local setup even for this unlikely future scenario in which all of the mainstream LLM providers simultaneously decide to make their LLMs poor quality and none of them sees it as a market opportunity to provide good service.

But even if we ignore all of that and assume that all cloud inference everywhere becomes bad at the same time at some point in the future, you would still be better off buying your own inference hardware at that point in time. Spending the money to buy two M3 Ultras right now to prepare for an unlikely future event is illogical.

The only reasons to run local LLMs are that you have privacy requirements or you want to do it as a hobby.
CamperBob2 a day ago:
> If anyone wants to bet that future cloud hosted AI models will get worse than they are now, I will take the opposite side of that bet.

OK. How do we set up this wager? I'm not knowledgeable about online gambling or prediction markets, but further enshittification seems like the world's safest bet.