logicallee 2 days ago

Do any of you use this as a replacement for Claude Code? For example, you might use it with OpenClaw. I have a Mac Mini M4 with 24 GB of integrated RAM that I currently run Claude Code on; do you think I can replace it with OpenClaw and one of these models?

Schekin a day ago | parent | next [-]

This matches my experience.

The weights usually arrive before the runtime stack fully catches up.

I tried Gemma locally on Apple Silicon yesterday — promising model, but Ollama felt like more of a bottleneck than the model itself.

I had noticeably better raw performance with mistralrs (I found it on Reddit, then GitHub), but the coding/tool-use workflow felt weaker. So the tradeoff wasn't really model quality; it was runtime speed vs workflow maturity.
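"Raw performance" here usually means generation throughput. A minimal sketch of how you'd compare two runtimes on the same prompt and model (the runtime names come from this thread; the timing figures are made up for illustration):

```python
# Compare generation throughput between two local runtimes.
# The numbers below are illustrative, not real benchmarks.

def tokens_per_second(tokens_generated: int, elapsed_seconds: float) -> float:
    """Generation throughput, the usual metric for comparing runtimes."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return tokens_generated / elapsed_seconds

# Same prompt, same model, two runtimes (hypothetical timings).
ollama_tps = tokens_per_second(512, 64.0)     # 8.0 tok/s
mistralrs_tps = tokens_per_second(512, 40.0)  # 12.8 tok/s
print(f"ollama: {ollama_tps:.1f} tok/s, mistralrs: {mistralrs_tps:.1f} tok/s")
```

In practice you would take these numbers from each runtime's own timing output rather than timing by hand.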

FullyFunctional a day ago | parent | prev | next [-]

Ollama made it trivial for me to use Claude Code on my 48 GB Mac Mini M4 Pro with any model, including Qwen3.5…nvfp4, which is the best I've tried so far. Once Ollama has a Mac-friendly version of Gemma4 I'll jump right on board (and do educate me if I'm missing something).

ar_turnbull 2 days ago | parent | prev | next [-]

Following, as I also don't love the idea of double-paying Anthropic for both my usage plan and API credits to feed my pet lobster.

hacker_homie a day ago | parent | prev | next [-]

Honestly, for that, [Qwen3-Coder-Next-GGUF](https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF) still seems to be best in class.

I am testing Gemma4 now; I will update this comment with what I find.

downrightmike a day ago | parent | prev [-]

Did you try it?

logicallee a day ago | parent [-]

Yes, I've now tried both the 20 GB version (gemma4:31b), which is the largest on the page[1], and the ~10 GB version (gemma4:e4b). The 20 GB version was rather slow even when fully loaded with some RAM still free, while the 10 GB version was speedy. I installed OpenClaw but couldn't get it to act as an agent the way Claude Code does. If you'd like to see a video of how both perform with almost nothing else running, on a Mac Mini M4 with 24 GB of RAM, I just recorded one here:[2]

[1] https://ollama.com/library/gemma4

[2] https://www.youtube.com/live/G5OVcKO70ns
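The 20-GB-slow / 10-GB-fast result is consistent with a back-of-envelope memory budget. A minimal sketch, assuming macOS caps GPU-wired memory at roughly 75% of total RAM by default and a couple of GB of KV-cache overhead (both figures are assumptions, not measurements):

```python
# Back-of-envelope check: does a model comfortably fit in unified memory?
# Assumptions (not measured): macOS limits GPU-wired memory to ~75% of
# total RAM by default, and the KV cache needs a couple of GB on top of
# the weights. A model that exceeds the budget spills and slows down.

def fits_comfortably(model_gb: float, total_ram_gb: float,
                     gpu_limit_fraction: float = 0.75,
                     kv_cache_gb: float = 2.0) -> bool:
    """True if weights + KV cache fit under the assumed GPU memory cap."""
    budget_gb = total_ram_gb * gpu_limit_fraction
    return model_gb + kv_cache_gb <= budget_gb

# 24 GB Mac Mini: ~18 GB usable for the model under these assumptions.
print(fits_comfortably(20, 24))  # 20 GB model: False -> spills, gets slow
print(fits_comfortably(10, 24))  # 10 GB model: True  -> stays fast
```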

tr33house a day ago | parent [-]

Thank you for the video; it was super helpful. The 20 GB version was clearly struggling while the 10 GB version was flying. I suspect the issue was virtual-memory pages that were actually on disk, perhaps combined with macOS memory compression.
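One way to check the swapping/compression hypothesis on macOS is `vm_stat`, which reports counters like "Pages occupied by compressor" and "Swapouts". A sketch that parses that output format (the sample text and its numbers are illustrative; on a real machine you would feed in actual `vm_stat` output):

```python
# Parse vm_stat-style output to estimate compressed memory on macOS.
# The SAMPLE text below is illustrative, not a real measurement; replace
# it with the output of running vm_stat on the machine in question.

PAGE_SIZE = 16384  # Apple Silicon uses 16 KiB pages

SAMPLE = """\
Pages occupied by compressor:             200000.
Swapouts:                                 500000.
"""

def pages(text: str, label: str) -> int:
    """Return the page count for a vm_stat counter line, 0 if absent."""
    for line in text.splitlines():
        if line.startswith(label):
            return int(line.split(":")[1].strip().rstrip("."))
    return 0

compressed_gb = pages(SAMPLE, "Pages occupied by compressor") * PAGE_SIZE / 2**30
print(f"compressed: {compressed_gb:.1f} GB")  # -> compressed: 3.1 GB
```

A large compressor count, or swapouts climbing while the model generates, would back up the "pages were actually on disk" theory.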