saagarjha 4 days ago

One does not simply put a 5090 into an existing chip.

giancarlostoro 3 days ago | parent

Not what I am suggesting. However, having trained a few different things on a modest M4 Pro chip (so not even their most powerful chip, mind you) and used it for local-first AI inference, I can see the value. A single server could serve an LLM for a small business and cost far less in power than running the same inference on a 5090.

I could also see universities giving this type of compute access to students more cheaply, for working on more basic, less resource-intensive models.

saagarjha 3 days ago | parent

I think a 5090 will handily beat it on power usage.