hebejebelus 4 days ago:
An interesting thought experiment: a fully local, off-grid, off-network LLM device, powered by solar or wind or what have you. I suppose the Mac Studio route is a good option here; Apple makes the most energy-efficient high-memory options. Back-of-the-napkin math says it's possible, just at a high up-front cost. Interesting to imagine a somewhat catastrophe-resilient LLM device…
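For the curious, here is a minimal sketch of that napkin math. Every number below (idle draw, inference draw, duty cycle, sun hours) is a rough assumption of mine, not a measured spec:

    # Back-of-the-napkin sizing for an off-grid LLM box.
    # All constants are rough assumptions, not measured figures.
    IDLE_W = 30        # assumed Mac Studio idle draw (W)
    INFER_W = 250      # assumed draw during inference (W)
    DUTY = 0.25        # assumed fraction of the day spent inferring
    SUN_HOURS = 4.0    # assumed usable full-sun hours per day

    avg_w = INFER_W * DUTY + IDLE_W * (1 - DUTY)
    wh_per_day = avg_w * 24
    panel_w = wh_per_day / SUN_HOURS        # panel wattage to break even
    battery_wh = avg_w * (24 - SUN_HOURS)   # battery to ride out the dark hours

    print(f"avg draw:   {avg_w:.0f} W")      # 85 W
    print(f"daily need: {wh_per_day:.0f} Wh")  # 2040 Wh
    print(f"panels:     {panel_w:.0f} W")    # 510 W
    print(f"battery:    {battery_wh:.0f} Wh")  # 1700 Wh

With those assumptions you land around 500W of panels and a ~1.7kWh battery, i.e. feasible, but the solar hardware alone costs a meaningful fraction of the computer.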
evilduck 3 days ago:
Macs would be the most power-efficient option with the fastest memory, but an AI Max+ 395 based system would probably be the most cost-efficient right now. A Framework Desktop with 128GB of shared RAM only pulls 400W (and could be underclocked), and it's cheaper by enough that you could buy it plus 400W of solar panels and a decently large battery for less than a Mac Studio with 128GB of RAM. Unfortunately, the power-efficiency win costs more than just buying extra generation and storage capacity.
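To put rough numbers on that trade-off (every price below is a ballpark assumption of mine, not a quote):

    # Rough cost comparison; all prices are ballpark assumptions.
    framework_128gb = 2500   # assumed Framework Desktop w/ 128GB shared RAM
    panels_400w     = 300    # assumed cost of 400W of solar panels
    battery         = 700    # assumed ~2kWh LiFePO4 battery
    mac_studio_128  = 3700   # assumed Mac Studio w/ 128GB

    off_grid_framework = framework_128gb + panels_400w + battery
    print(off_grid_framework, "vs", mac_studio_128)  # 3500 vs 3700

Under those assumptions the less efficient box plus its own power source still comes in under the Mac alone, which is the whole point: generation and storage are cheap relative to the efficiency premium.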
| ||||||||
ImPrajyoth 3 days ago:
That is the endgame. I think we are moving toward a two-layer compute model. The Cloud: for massive reasoning. The Local Edge: a small, resilient model that lives on-device and handles the OS loop, privacy, and immediate context. BrainKernel is my attempt to prototype that Local Edge layer. It's messy right now, but I think the OS of 2030 will definitely have a local LLM baked into the kernel. A sketch of what that split could look like follows.
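A minimal sketch of the two-layer routing, assuming a privacy-first heuristic; the class names and routing rule here are hypothetical illustration, not BrainKernel's actual API:

    # Hypothetical two-layer split: private or lightweight requests stay
    # on-device, heavy public reasoning goes to the cloud.
    class LocalModel:
        """Small on-device model: the OS loop, privacy, immediate context."""
        def generate(self, prompt: str) -> str:
            return f"[local] {prompt}"

    class CloudModel:
        """Big remote model: heavy reasoning only, never private data."""
        def generate(self, prompt: str) -> str:
            return f"[cloud] {prompt}"

    def route(prompt: str, *, private: bool, heavy: bool) -> str:
        # Privacy pins the request to the device; otherwise heavy work goes up.
        if private or not heavy:
            return LocalModel().generate(prompt)
        return CloudModel().generate(prompt)

    print(route("summarize my clipboard", private=True, heavy=False))
    print(route("plan a multi-step refactor", private=False, heavy=True))

The design choice that matters is that privacy overrides capability: a private request never leaves the device even when the cloud model would do a better job.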
| ||||||||