diggan | 3 days ago
Personally, I've been using GPT-OSS-120b locally with reasoning_effort set to `high` and it blows pretty much every other local model out of the water, though it takes a long time to eventually produce a proper reply. But for fire-and-forget jobs like "Create a well-researched report on X from perspective Y" it works really well.
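For anyone curious how that knob gets set: a minimal sketch of passing reasoning_effort to a locally served GPT-OSS model through an OpenAI-compatible endpoint. The base_url, model tag, and whether your particular server honors the parameter are assumptions, not something from the comment above.

```python
# Sketch: ask a locally hosted GPT-OSS-120B for a high-effort reply via an
# OpenAI-compatible API (e.g. Ollama, llama.cpp server, or vLLM).
# base_url and model tag are assumptions; adjust for your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

resp = client.chat.completions.create(
    model="gpt-oss:120b",         # assumed local model tag
    reasoning_effort="high",      # trades latency for deeper reasoning
    messages=[
        {"role": "user",
         "content": "Create a well-researched report on X from perspective Y"},
    ],
)
print(resp.choices[0].message.content)
```

With `high` effort the model spends far more tokens on its hidden reasoning before answering, which is why fire-and-forget jobs suit it better than interactive use.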
cyberninja15 | 3 days ago | parent
What machine are you running GPT-OSS-120B on? I'm currently only able to get GPT-OSS-20B working on my MacBook using Ollama.