| ▲ | pimeys 15 hours ago | parent | next [-] |
| Why does it need to work only on a Mac? And why is that better than running gpt-oss with llama.cpp and Codex on my Linux box? |
| |
| ▲ | adam_patarino 13 hours ago | parent [-] | | Our model is bigger and more capable than gpt-oss and can run at full context at 40 tokens/s. We are rolling out on Mac to start, with plans to release Windows and Linux versions within 3 months. |
|
|
| ▲ | yorwba 16 hours ago | parent | prev | next [-] |
| "Join the Waitlist" |
|
| ▲ | edoceo 16 hours ago | parent | prev [-] |
| Mac only :( |
| |
| ▲ | adam_patarino 13 hours ago | parent [-] | | We will have Windows and Linux versions early next year! We are just starting with Mac for the first beta testers. |
|