▲ operatingthetan 3 hours ago
An M4 mini is overkill just to run OpenClaw. I'm running it on a Pentium J5005 that's also running 20 other services in Docker. I think the main draw was iMessage access. People also dream of using the Mac to run the LLM locally, but the 16 GB models don't have enough RAM.
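Rough arithmetic behind the 16 GB point (a sketch with illustrative numbers, not a benchmark; the model size and quantization are assumptions):

```python
# Back-of-envelope for why 16 GB is tight for a local LLM.
# Figures are illustrative, not measurements of any specific model.

def weight_gb(params_billion: float, bits: int) -> float:
    """Approximate weight memory in GB for a model with the given
    parameter count (in billions) at the given quantization width."""
    return params_billion * 1e9 * bits / 8 / 1e9

# A hypothetical ~20B-parameter model quantized to 4 bits:
weights = weight_gb(20, 4)
print(round(weights, 1))  # 10.0 -- ~10 GB for the weights alone

# Add KV cache, the inference runtime, macOS itself, and any other
# services, and a 16 GB machine has very little headroom left.
```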
▲ apexalpha an hour ago | parent | next [-]
When they say 'due to openclaw', they're referring to running the AI models that openclaw uses, not to openclaw itself.
▲ hparadiz 3 hours ago | parent | prev | next [-]
The shortage is for the 512, 256, and 128 models.
▲ amelius 2 hours ago | parent | prev [-]
People are running openclown on microcontrollers.