nightski 3 hours ago
At that cost I'd just buy some GPUs and run a local model, though. Maybe a couple of RTX 6000s.
organsnyder 3 hours ago
That's about as much as my Framework Desktop cost (thankful that I bought it before all the supply craziness we're seeing across the industry). In the relatively small amount of time I've spent tinkering with it, I've used a local LLM to do some real tasks. It's not as powerful as Claude, but given the immaturity of the local LLM space, on both the hardware and software side, I think it has real potential. Cloud services have a head start for quite a few reasons, but I really think we could see local LLMs coming into their own over the next 3-5 years.
gbnwl 3 hours ago
Same, but I imagine that once prices start rising, the prices of GPUs that can run any decent local model will soar (again) as well. You and I wouldn't be the only people with this idea, right?
fishpham 3 hours ago
Those won't be sufficient to run SOTA / trillion-parameter models.
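A rough back-of-envelope sketch of why (the figures are illustrative assumptions: ~1T parameters, aggressive 4-bit quantization, and ~48 GB of VRAM per RTX 6000 Ada-class card; KV cache and activation memory are ignored):

```python
# Weights-only VRAM estimate for a SOTA-scale model (assumptions noted above).
params = 1e12               # ~1 trillion parameters
bytes_per_param = 0.5       # 4-bit quantization
weights_gb = params * bytes_per_param / 1e9     # ~500 GB of weights alone
vram_per_card_gb = 48       # assumption: RTX 6000 Ada-class card
cards_needed = weights_gb / vram_per_card_gb    # ~10+ cards just for the weights
print(f"~{weights_gb:.0f} GB of weights -> ~{cards_needed:.0f} x {vram_per_card_gb} GB GPUs")
```

Even under these generous assumptions, a couple of cards covers on the order of 100-200 GB, well short of what the weights alone would need.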