tedivm 2 days ago
This is only true if you ignore the growing open source models. I'm running Qwen3-30B at home and it works great for most of my use cases. I think we're going to find that the optimizations coming from companies out of China will keep making local LLMs easier for folks to run.
DSingularity 2 days ago | parent
What hardware do you use?