wiradikusuma 3 hours ago

I guess the sudden demand is due to OpenClaw? But most people will still use cloud LLMs, right? Is there anything particular about the Mac mini that non-Macs lack?

zarzavat 2 hours ago | parent | next [-]

Not just OpenClaw. The Mac mini is just stupidly good value for a desktop computer, and the RAM prices have only enhanced its appeal.

Apple doesn't make much of a fuss about it but their chip performance is laughably ahead of the other chipmakers.

The Mac Mini M4 gets a score of 3788 in Geekbench[0]. The top of the PC processor chart is 3395[1]. It's not even Apple's latest chip!

PC processors can only keep up by adding more cores, but real-world performance in many workloads benefits more from a smaller number of higher-performance cores.

[0]: https://browser.geekbench.com/mac-benchmarks

[1]: https://browser.geekbench.com/processor-benchmarks

ffsm8 an hour ago | parent [-]

If you remove the Mac filter, its performance is not even in the top ten.

Which is obvious if you spent more than half a microsecond thinking about it, because Apple silicon barely draws any power. Its performance is fantastic in its niche, which is squarely what a home user cares about, but it's not leading on raw benchmark performance, because that's not what Apple designed it for.

The reason it's coincidentally good for local AI inference also comes down to the fact that the integrated GPU has shared access to system RAM. That means low performance/throughput, but a large memory pool.

Which is great for home use, but once again not gonna top charts.

zarzavat an hour ago | parent [-]

Which top 10 are you talking about? If you mean the top absolute geekbench scores, those are always with the assistance of cryogenic cooling.

ashdksnndck 2 hours ago | parent | prev | next [-]

Mac mini has first-class access to iCloud, Photos, iMessage, etc. So if you are deep in the Apple ecosystem you might prefer it for that reason. I have a Windows gaming desktop that I could use as a server for openclaw/cowork, but I realized I simply don't trust that system enough to give it access to all the personal stuff I'm giving to the AI. I trust Anthropic and Apple. I don't trust whatever junk is running on my gaming desktop.

If you want to run local models, another advantage is Apple's unified memory architecture. The biggest Mac mini has 64GB of RAM, and the Mac Studio goes up to 512GB. Compare this little box to the monster Nvidia GPU system you would have to buy to get the same memory, and how much your PG&E bill would go up. That doesn't account for the shortage of basic $600 Mac minis, though.
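To see why those memory ceilings matter, here's a rough back-of-envelope sketch of how much memory model weights alone need at different quantization levels (the model sizes and the `weights_gb` helper are illustrative assumptions, and the figures ignore KV cache and activation overhead, which add more on top):

```python
# Rough back-of-envelope: memory needed just to hold model weights.
# Ignores KV cache and activations, which add further overhead.
def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a given parameter count and quantization."""
    # params_billion * 1e9 params * bits/8 bytes each, expressed in GB (1e9 bytes)
    return params_billion * bits_per_weight / 8

# A 70B model at 4-bit quantization needs roughly 35 GB of weights,
# so it fits in a 64GB Mac mini. At 16-bit it needs ~140 GB, and a
# 405B model at 8-bit needs ~405 GB -- territory only the 512GB
# Mac Studio (or a multi-GPU rig) can reach in one box.
for params, bits in [(70, 4), (70, 16), (405, 8)]:
    print(f"{params}B @ {bits}-bit ≈ {weights_gb(params, bits):.0f} GB")
```

The point is that unified memory turns "how big a model can I run" into a single RAM number, instead of being capped by a discrete GPU's VRAM.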

operatingthetan 3 hours ago | parent | prev | next [-]

An M4 mini is overkill just to run OpenClaw. I'm running it on a Pentium J5005, and that machine is also running 20 other services in Docker. I think the main thing was that many people wanted it to be able to access iMessage. I think people also dream of using the Mac to run the LLM, but the 16GB ones don't have enough RAM.

apexalpha an hour ago | parent | next [-]

When they say 'due to OpenClaw' they mean running the AI models that OpenClaw uses, not OpenClaw itself.

hparadiz 3 hours ago | parent | prev | next [-]

The shortage is for the 512GB, 256GB, and 128GB models.

ashdksnndck 39 minutes ago | parent | next [-]

The basic 16GB Mac mini is also hard to buy. I bought one used not to save money but because I couldn’t find any store online with it in stock.

reverius42 2 hours ago | parent | prev [-]

Those are the ones that can run the LLMs. Not a coincidence.

amelius 2 hours ago | parent | prev [-]

People are running openclown on microcontrollers.

hparadiz 3 hours ago | parent | prev | next [-]

You can look up benchmarks. It's different depending on the model of Mac mini and the model of LLM.

The takeaway is that some of the Apple hardware hits a sweet spot of performance and price. That may change in the future, but for now it's driving a lot of demand from people who want to run inference without discrete GPUs.

Also, Macs keep a lot of their resale value, so you can use them for a while and then sell them for sometimes 80% of their original price.

chillfox 2 hours ago | parent | prev | next [-]

Affordable RAM!

I recently bought one for my k3s cluster, and it was the cheapest 16GB of RAM I could get by a decent margin.

znpy 3 hours ago | parent | prev [-]

My understanding is that openclaw is only a factor, and a relatively minor one.

Most likely the limiting factor is the supply crunch that chip companies are going through.