| ▲ | raw_anon_1111 5 hours ago |
In the history of cloud computing, prices have mostly only come down, especially as inference becomes a commodity. Realistically, just looking at Mac prices, the cost of a computer with decent local inference would be around $6000 per person. The world is not moving back to on-prem.
| ▲ | Aurornis 2 hours ago | parent | next [-] |
> Realistically, just looking at Mac prices, the cost of a computer with decent local inference would be around $6000 per person.

As someone who has hardware in that price range and plays with local LLMs: the gap between Opus or GPT and the local models is still very large for work beyond simple queries. Self-hosting also makes my office hot from the power consumption whenever I run anything more than short queries. If you haven't heard your Mac's fans spin up much yet, running local LLMs will get you acquainted with the sound of their cooling systems at full blast.
| ▲ | esseph 4 hours ago | parent | prev [-] |
> The world is not moving back to on prem.

Lol, you should tell my customers (who are moving back on-prem) that! You should also tell Microsoft, who just yesterday said they are going back to focusing on local apps.