Aurornis | 4 hours ago
> Realistically, just looking at Mac prices, the cost of a computer with decent local inference would be around $6000 per person.

As someone who has hardware in that price range and plays with local LLMs: the gap between Opus or GPT and the local models is still very large for anything beyond simple queries. Self-hosting also starts making my office hot from all the power consumption whenever I use it for more than short queries. If you haven't heard your Mac's fans spin up much yet, running local LLMs will get you acquainted with the sound of its cooling system at full blast.