tim-tday 7 hours ago
But do you use any AI services like ChatGPT, Claude, or Gemini? If so, you're offloading your compute from a local stack to a high-performance Nvidia GPU stack operated by one of the big five. It's not that you aren't using new hardware; it's that you've shifted the load from local to centralized. I'm not saying this is bad or anything; it's just another iteration of the centralized-vs-decentralized pendulum swing that has been happening in tech since the beginning (mainframes with dumb terminals, desktops, the cloud, mobile, etc.). Apple might see a slowdown in hardware sales because of it. Nvidia might see a sales boom because of it. The future could very well bring a swing back. Imagine you could run a stack of Mac minis that replaced your monthly Claude Code bill. It might pay for itself in 6 months. (This doesn't exist yet, but it theoretically could happen.)
kouteiheika 7 hours ago | parent
> Imagine you could run a stack of Mac minis that replaced your monthly Claude code bill. Might pay for itself in 6mo (this doesn’t exist yet but it theoretically could happen)

You don't have to imagine. You can, today, with a few (major) caveats: you'll only match the Claude of roughly six months ago (open-weight models tend to lag the frontier by about half a year), and you'd need to buy a couple of RTX 6000 Pros (each one is ~$10k). Technically you could also do this with Macs (thanks to their unified memory), but the speed wouldn't be great, so in practice it'd be unusable.
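The break-even math above is easy to sketch. The numbers below are illustrative assumptions (two ~$10k GPUs from the comment; the monthly hosted bill and electricity cost are hypothetical placeholders, not quoted figures):

```python
# Back-of-the-envelope break-even: local inference hardware vs. a hosted bill.
# All dollar figures are illustrative assumptions, not real quotes.

def breakeven_months(hardware_cost: float, monthly_bill: float,
                     monthly_power_cost: float = 0.0) -> float:
    """Months until the hardware pays for itself versus the hosted service."""
    monthly_savings = monthly_bill - monthly_power_cost
    if monthly_savings <= 0:
        raise ValueError("hosted bill must exceed local running costs")
    return hardware_cost / monthly_savings

# Two RTX 6000 Pros at ~$10k each; assume a $1,000/month hosted bill
# and ~$100/month in electricity (both hypothetical).
months = breakeven_months(hardware_cost=2 * 10_000,
                          monthly_bill=1_000,
                          monthly_power_cost=100)
print(f"{months:.1f} months")  # 22.2 months
```

So under these assumed numbers the payback is closer to two years than six months; the 6-month figure would need either cheaper hardware or a much larger monthly bill.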