▲ pimeys 14 hours ago
Why does it need to work only on a Mac? And why is that better than running gpt-oss with llama.cpp and Codex on my Linux box?
▲ adam_patarino 13 hours ago | parent
Our model is bigger and more capable than gpt-oss and can run at full context at 40 tokens/s. We are rolling out on Mac first, with plans to release Windows and Linux versions within 3 months.