Dilettante_ a day ago
But the progress goes both ways: in five years, you would still want whatever is running in the cloud datacenters. Just like today: you could run GPT-2 locally as a coding agent, but we want the 100x-as-powerful shiny thing.
mcny a day ago
That would be great if it were the case, but my understanding is that progress is plateauing. I don't know how much of this is Anthropic / Google / OpenAI holding themselves back to save money, and how much is the state of the art genuinely slowing down, though. I can imagine there being a 64 GB consumer GPU in five years, as absurd as it feels to type that today.
brazukadev a day ago
Not really; for many cases I'm happy using Qwen3-8B on my computer, and I would be very happy if I could run Qwen3-Coder-30B-A3B.
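For anyone curious what "using Qwen3-8B on my computer" looks like in practice, here is a minimal sketch using the Hugging Face transformers library. Assumptions: the public Qwen/Qwen3-8B checkpoint, an illustrative prompt, and roughly 16 GB of (V)RAM for the bf16 weights; quantized GGUF builds via llama.cpp or Ollama are the more common route on smaller consumer hardware.

    # Minimal local inference with Qwen3-8B via Hugging Face transformers.
    # Assumes: pip install transformers torch accelerate, and ~16 GB of
    # memory for the bf16 weights; use a quantized build if you have less.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3-8B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    # Illustrative coding prompt; any chat-style message works here.
    messages = [{"role": "user",
                 "content": "Write a Python function that reverses a string."}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:],
                           skip_special_tokens=True))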