logicallee 2 days ago
Do any of you use this as a replacement for Claude Code? For example, you might use it with OpenClaw. I have a Mac Mini M4 with 24 GB of integrated RAM that I currently run Claude Code on; do you think I can replace that with OpenClaw and one of these models?
Schekin a day ago
This matches my experience. The weights usually arrive before the runtime stack fully catches up. I tried Gemma locally on Apple Silicon yesterday: promising model, but Ollama felt like more of a bottleneck than the model itself. I got noticeably better raw performance with mistralrs (I found it on Reddit, then on GitHub), but the coding/tool-use workflow felt weaker. So the tradeoff wasn't really model quality; it was runtime speed vs. workflow maturity.
FullyFunctional a day ago
Ollama made it trivial for me to use Claude Code on my 48 GB Mac Mini M4P with any model, including the Qwen3.5…nvfp4, which is the best I've tried so far. Once Ollama has a Mac-friendly version of Gemma4 I'll jump right on board (and do educate me if I'm missing something).
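For anyone wanting to poke at the same setup before wiring up a coding agent, a rough sketch of talking to Ollama's OpenAI-compatible endpoint directly (the model name here is just a placeholder, not a recommendation; swap in whatever `ollama list` shows on your machine):

```shell
# Ollama serves an OpenAI-compatible chat API locally at /v1/chat/completions,
# no API key required. Build the request payload first so it can be inspected.
MODEL="qwen3-coder"   # placeholder; use a model you have actually pulled
PAYLOAD=$(printf '{"model":"%s","messages":[{"role":"user","content":"Write hello world in Go."}]}' "$MODEL")

# Print the payload; pipe it to curl against http://localhost:11434/v1/chat/completions
# once the Ollama server is running.
echo "$PAYLOAD"
```

From there, any client that accepts a custom OpenAI-style base URL can point at `http://localhost:11434/v1` instead of the hosted API.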
ar_turnbull 2 days ago
Following, as I also don't love the idea of double-paying Anthropic: my usage plan plus API credits to feed my pet lobster.
hacker_homie a day ago
Honestly, for that use case [Qwen3-Coder-Next-GGUF](https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF) still seems to be best in class. I am testing Gemma4 now; I will update this comment with what I find.
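If you want to try a GGUF repo like that without converting anything, Ollama can pull straight from Hugging Face with an `hf.co/` prefix. A sketch (the `:Q4_K_M` quantization tag is an assumption; check the repo's Files tab for the quants actually published):

```shell
# Ollama accepts hf.co/{user}/{repo}[:{quant}] as a model reference.
REPO="hf.co/unsloth/Qwen3-Coder-Next-GGUF"
QUANT="Q4_K_M"   # assumed quant tag; verify it exists in the repo

# Dry run: print the command rather than executing it, since the
# actual pull downloads many gigabytes of weights.
echo "ollama run ${REPO}:${QUANT}"
```

Omitting the tag falls back to the repo's default quantization.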
downrightmike a day ago
Did you try it?