labcomputer | 2 hours ago
I think the mini is just a better value, all things considered.

First, a 16GB RPi that is in stock and that you can actually buy seems to run about $220. Then you need a case, a power supply (the Pi is picky about power; not just any USB brick will do), and an NVMe drive. By the time it's all said and done, you're looking at close to $400. I know HN likes to quote the starting price for the 1GB model and assume that everyone has spare NVMe sticks and RPi cases lying around, but $400 is the realistic price for most users who want to run LLMs.

Second, most of the time you can find Minis on sale for $500 or less. So the price difference is less than $100 for something that works out of the box and that you don't have to fuss with.

Then you have to consider the ecosystem:

* Accelerated PyTorch works out of the box by simply changing the device from 'cuda' to 'mps' (first sketch below). In the real world, an M5 mini will give you a decent fraction of V100 performance (for reference, an M2 Max is about 1/3 the speed of a V100, real-world).

* For less technical users, Ollama just works. It has OpenAI and Anthropic APIs out of the box, so you can point Claude Code or OpenCode at it (second sketch below). All of this can be set up from the GUI.

* Apple does a shockingly good job of reducing power consumption, especially idle power consumption. It wouldn't surprise me if a Pi 5 has 2x the idle draw of an M5 Mini. That matters for a computer running 24/7.
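A minimal sketch of the 'cuda' -> 'mps' swap described above, assuming a recent PyTorch build with the MPS backend; the model and tensor shapes are placeholders:

    import torch

    # Pick the best available backend: CUDA on NVIDIA boxes, MPS on
    # Apple silicon, CPU otherwise. Nothing else in the script changes.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    # Placeholder model and input; .to(device) moves them onto the
    # accelerator, same as it would for a CUDA device.
    model = torch.nn.Linear(1024, 1024).to(device)
    x = torch.randn(8, 1024, device=device)
    print(model(x).device)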
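And a sketch of pointing a stock OpenAI client at Ollama's OpenAI-compatible endpoint (default port 11434); "llama3.2" is just an assumed example, substitute whatever model you've pulled locally:

    from openai import OpenAI

    # Ollama serves an OpenAI-compatible API under /v1; the api_key is
    # required by the client library but ignored by Ollama.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    # Model name is an assumption -- use anything fetched via
    # `ollama pull <model>`.
    resp = client.chat.completions.create(
        model="llama3.2",
        messages=[{"role": "user", "content": "Say hi in one sentence."}],
    )
    print(resp.choices[0].message.content)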
weikju | an hour ago
> In the real world, an M5 mini will give you a decent fraction of V100 performance

In the real world, the M5 Mini is not yet on the market. Check your LLM facts ;)