engineer_22 8 hours ago

I find that my cell phone, which is four generations old, and my desktop computer, which is two generations old, are totally adequate for everything I need to do, and I don't need faster processing.

Lio 8 hours ago | parent | next [-]

I used to think that.

I really don't care about most new phone features and for my laptop the M1 Max is still a really decent chip.

I do want to run local LLM agents though and I think a Mac Studio with an M5 Ultra (when it comes out) is probably how I'm going to do that. I need more RAM.

I bet I'm not the only one looking at that kind of setup now who was previously happy with what they had.

tim-tday 7 hours ago | parent [-]

Apple has made some good progress on memory sharing over Thunderbolt. If they could get that ironed out, you might be able to run a good LLM on a cluster of Mac minis. You can't do that today, but people are working on it; one person may have gotten it working, but it's not ready for prime time yet.

bigyabai 2 hours ago | parent [-]

> Apple has made some good progress on memory sharing over thunderbolt

The only reason Thunderbolt exists is to expose DMA over an externally tunneled PCIe link. I'd hope they've made progress on it; Thunderbolt has only been around for fourteen years, after all.

tim-tday 7 hours ago | parent | prev | next [-]

But do you use any AI services like ChatGPT, Claude, or Gemini? If so, you're offloading your compute from a local stack to a high-performance Nvidia GPU stack operated by one of the big five. It's not that you aren't using new hardware; it's that you've shifted the load from local to centralized.

I’m not saying this is bad or anything, it’s just another iteration of the centralized vs decentralized pendulum swing that has been happening in tech since the beginning (mainframes with dumb terminals, desktops, the cloud, mobile) etc.

Apple might experience a slowdown in hardware sales because of it. Nvidia might experience a sales boom because of it. The future could very well bring a swing back. Imagine you could run a stack of Mac minis that replaced your monthly Claude code bill. Might pay for itself in 6mo (this doesn’t exist yet but it theoretically could happen)

kouteiheika 7 hours ago | parent [-]

> Imagine you could run a stack of Mac minis that replaced your monthly Claude code bill. Might pay for itself in 6mo (this doesn’t exist yet but it theoretically could happen)

You don't have to imagine. You can do it today, with a few major caveats: you'll only match the Claude of roughly six months ago (open-weight models lag the frontier by about half a year), and you'd need to buy a couple of RTX 6000 Pros (each one is ~$10k).

Technically you could also do this with Macs (thanks to their unified memory), but the token throughput would be low enough to make it unusable in practice.
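The payback framing above is easy to sanity-check with arithmetic. A minimal sketch, where the hardware price follows the comment (~$10k per RTX 6000 Pro, two cards), but the monthly subscription and electricity figures are purely hypothetical placeholders:

```python
# Back-of-the-envelope payback calculation for a local-inference rig
# replacing a monthly subscription. All figures are illustrative
# assumptions, not quotes.

def payback_months(hardware_cost: float, monthly_bill: float,
                   monthly_power_cost: float = 0.0) -> float:
    """Months until the hardware cost is recovered by the cancelled bill."""
    net_saving = monthly_bill - monthly_power_cost
    if net_saving <= 0:
        raise ValueError("running costs exceed the bill; it never pays back")
    return hardware_cost / net_saving

gpus = 2 * 10_000   # ~$20k in GPUs, per the comment above
bill = 200          # hypothetical monthly subscription being replaced
power = 50          # hypothetical monthly electricity cost

print(f"payback: {payback_months(gpus, bill, power):.0f} months")
# → payback: 133 months
```

At those assumed numbers the payback is on the order of eleven years, not six months, which is why the size of the bill being replaced matters so much to the whole argument.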

sib 5 hours ago | parent | prev | next [-]

Wonderful!

I wish I were in that situation, but I find myself able to use lots more compute than I have. And it seems like many others feel the same.

raw_anon_1111 8 hours ago | parent | prev | next [-]

We have data: people are buying phones, in aggregate, about every 2.5–3 years, especially in the US, where almost no one pays for a phone outright.

ai-x 7 hours ago | parent | prev [-]

You're an anecdote, not data.

The data says demand >>>>> supply.