| ▲ | wiradikusuma 3 hours ago |
I guess the sudden demand is due to OpenClaw? But most people will still use cloud LLMs, right? Is there anything in particular about the Mac mini that non-Macs lack?
| ▲ | zarzavat 2 hours ago | parent | next [-] |
Not just OpenClaw. The Mac mini is just stupidly good value for a desktop computer, and rising RAM prices have only enhanced its appeal. Apple doesn't make much of a fuss about it, but their chip performance is laughably ahead of the other chipmakers. The Mac mini's M4 scores 3788 in single-core Geekbench[0]; the top of the PC processor chart is 3395[1]. And it's not even Apple's latest chip! PC processors can only keep up by adding more cores, but real-world performance in many workloads benefits more from a smaller number of higher-performance cores.
| ▲ | ashdksnndck 2 hours ago | parent | prev | next [-] |
The Mac mini has first-class access to iCloud, Photos, iMessage, etc., so if you are deep in the Apple ecosystem you might prefer it for that reason. I have a Windows gaming desktop that I could use as a server for OpenClaw/cowork, but I realized I simply don't trust that system enough to give it access to all the personal stuff I'm giving to the AI. I trust Anthropic and Apple; I don't trust whatever junk is running on my gaming desktop. If you want to run local models, another advantage is Apple's unified memory architecture. The biggest Mac mini has 64 GB of RAM, and the Mac Studio goes up to 512 GB. Compare this little box to the monster Nvidia GPU system you would have to buy to get the same memory, and how much your PG&E bill would go up. That doesn't account for the shortage of basic $600 Mac minis, though.
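A rough sketch of the memory arithmetic behind that comparison (the model sizes and quantization levels here are illustrative assumptions, not benchmarks of any particular model):

```python
# Back-of-envelope estimate of the RAM needed just to hold an LLM's
# weights, ignoring KV cache and runtime overhead (which add more).
def model_ram_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# e.g. a hypothetical 70B-parameter model:
print(model_ram_gb(70, 4))   # 35.0 GB at 4-bit -> fits a 64 GB Mac mini
print(model_ram_gb(70, 16))  # 140.0 GB at fp16 -> needs a big Mac Studio
```

On a discrete-GPU system those gigabytes have to fit in VRAM; on Apple silicon the same pool of unified memory serves both CPU and GPU, which is why the 64 GB and 512 GB figures above matter for local inference.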
| ▲ | operatingthetan 3 hours ago | parent | prev | next [-] |
An M4 mini is overkill just to run OpenClaw. I'm running it on a Pentium J5005 that's also running 20 other services in Docker. I think the main draw was that many people wanted it to be able to access iMessage. People also dream of using the Mac to run the LLM itself, but the 16 GB models don't have enough RAM.
| ▲ | hparadiz 3 hours ago | parent | prev | next [-] |
You can look up benchmarks; they vary by Mac mini model and by LLM. The takeaway is that some of the Apple hardware hits a sweet spot of performance and price. That may change in the future, but for now it's driving a lot of demand from people who want to run inference without discrete GPUs. Macs also hold a lot of their resale value, so you can use one for a while and then sometimes sell it for 80% of its original price.
| ▲ | 2 hours ago | parent | prev | next [-] |
[deleted]
| ▲ | chillfox 2 hours ago | parent | prev | next [-] |
Affordable RAM! I recently bought one for my k3s cluster, and it was the cheapest 16 GB of RAM I could get by a decent margin.
| ▲ | znpy 3 hours ago | parent | prev [-] |
My understanding is that OpenClaw is only one factor, and a relatively minor one. More likely the limiting factor is the crunch that chip companies are going through.