wmf 13 hours ago
> just want to run a 7-8b model locally

This is already solved by running LM Studio on a normal computer.
zozbot234 13 hours ago | parent
Ollama and llama.cpp are also common alternatives. But an 8B model isn't going to have much real-world knowledge or be highly reliable for agentic workloads, so it makes sense that people will want more than that.
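For context, running an 8B model with either tool is a one-liner. A minimal sketch with Ollama (the `llama3.1:8b` tag is just one example of an 8B model; the commented llama.cpp line assumes you already have a local GGUF file, whose filename here is hypothetical):

```shell
# Download and chat with an 8B model via Ollama (example model tag).
ollama pull llama3.1:8b
ollama run llama3.1:8b "Summarize what an agentic workload is."

# Roughly equivalent with llama.cpp, given a locally downloaded GGUF:
# llama-cli -m ./llama-3.1-8b-instruct.Q4_K_M.gguf -p "Hello"
```

Both run fine on a normal consumer machine with a quantized model, which is the "already solved" part; the reliability concern above is about the model's capability, not the tooling.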