reddit_clone 3 days ago |
I am running Ollama with 'SimonPu/Qwen3-Coder:30B-Instruct_Q4_K_XL' on an M4 Pro MBP with 48 GB of memory. From Emacs/gptel, it seems pretty fast. I have never used the big hosted LLMs, so I don't have a direct comparison, but the above model answered coding questions in a handful of seconds. The cost of memory (and disk) upgrades on Apple machines is exorbitant. |