simonw 2 days ago
It's rare to find a local model that's capable of running tools in a loop well enough to power a coding agent. I don't think gpt-oss:20b is strong enough to be honest, but 120b can do an OK job. Nowhere NEAR as good as the big hosted models though.
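The "tools in a loop" pattern behind a coding agent can be sketched roughly as follows. This is a minimal illustration, not any specific agent's implementation: the model is a stub standing in for a local LLM, and the message format, tool names, and function names are all illustrative.

```python
import json

def run_tool(name, args):
    """Dispatch a tool call; a real coding agent would expose shell, file I/O, etc."""
    tools = {"add": lambda a: a["x"] + a["y"]}  # toy tool for illustration
    return tools[name](args)

def stub_model(messages):
    """Stand-in for a local LLM: emits one tool call, then a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "add", "args": {"x": 2, "y": 3}}}
    result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"content": f"The result is {result}"}

def agent_loop(prompt, model, max_steps=5):
    """The core loop: ask the model, execute any tool call it requests,
    feed the result back, repeat until it produces a final answer."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = model(messages)
        if "tool_call" in reply:
            call = reply["tool_call"]
            result = run_tool(call["name"], call["args"])
            messages.append({"role": "tool", "content": json.dumps(result)})
        else:
            return reply["content"]
    raise RuntimeError("agent did not finish within max_steps")

print(agent_loop("What is 2 + 3?", stub_model))  # -> The result is 5
```

The hard part is not the loop itself but whether the model reliably emits well-formed tool calls and recovers from errors over many iterations, which is where smaller local models tend to fall down.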
ontouchstart 2 days ago | parent
Think of it as the early years of UNIX and the PC. Running inference and tools locally and offline opens doors to new industries. We might not even need the client/server paradigm locally; an LLM is just a probabilistic library we can call.
AlexCoventry 2 days ago | parent
Thanks.