echelon | 3 hours ago
Local is a dead end. Open source efforts need to give up on local AI and embrace cloud compute. We need to stop building toy models to run on RTX cards and instead try to compete with the hyperscalers. We need open-weights models that are big and run on H200s; those are the class of models that can actually compete. When the hyperscalers reach takeoff, we're done for. If we can stay within ~6 months of them, we might be able to slow them down or even break them.

If there were something 80-90% as good as Opus or Seedance or Nano Banana, more of the ecosystem would switch to open source, because it offers control and sovereignty. But we don't have that right now. If we had genuinely competitive open-weights models, universities, research teams, other labs, and other companies would be able to contribute to the effort collaboratively.

Instead, everyone in the open source world is trying to shrink these models to fit on their 3090, and that's such a wasted effort. It's short-term thinking. An "OpenRunPod/OpenOpenRouter" plus one-click deploys of models as good as Gemini will win over LMStudio and ComfyUI hacking a solution onto your own Nvidia gaming card. That's a tiny segment of the market, and the tools are all horrible to use anyway.

It's like we learned nothing from "The Year of Linux on Desktop 1999". Only when we realized the data center was our friend did we frame our open source effort appropriately.
zozbot234 | 3 hours ago
> We need open weights models that are big and run on H200s.

We already have this class of models: Kimi 2.5 and GLM-5 are proper SOTA models, and a larger Nemotron model may also be released at some point. With the NVMe-based offload work that's been landing lately you can even experiment with these models on your own hardware, and there are plenty of cheap third-party inference platforms for them too.
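For anyone curious what that looks like in practice, here's a minimal sketch using Hugging Face Transformers/Accelerate disk offload. The model id and offload path are illustrative placeholders, and the NVMe work the comment refers to may well be a different, lower-level implementation; this is just the most accessible equivalent today:

    # Sketch: run a model larger than VRAM by spilling layers
    # GPU -> CPU RAM -> NVMe, via Accelerate's "auto" device map.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "some-org/big-open-weights-model"  # illustrative placeholder

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",                    # fill GPU first, then CPU RAM, then disk
        offload_folder="/mnt/nvme/offload",   # overflow weights land on the NVMe drive
        torch_dtype="auto",
    )

    prompt = "The case for open-weights models is"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

Expect NVMe-backed layers to be orders of magnitude slower than VRAM, so this is for experimenting with big checkpoints, not serving them.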
lpcvoid | 3 hours ago
> Open source efforts need to give up on local AI and embrace cloud compute.

Oh god no, please not more slop. You're already consuming over 1 percent of human energy output; could you, like, chill a bit?
gessha | 3 hours ago
Man, going to personal computing was a mistake; we should've stayed jacked into the mainframes /s