potamic 3 hours ago

> You can do local AI inference and get Claude Opus-level performance (Kimi K2.5) over a cluster of Mac Studios with Exo.Labs

Does it do distributed inference? What kind of token speeds do you get?