y42 | 2 hours ago
Having an M3 with 36 GB, I was under the assumption that I could run models like Qwen and similar locally. It's quite easy to set up: you can use pi or hermes for CLI access, or "Continue" to use it in VS Code. You can choose between mlx, Ollama, and more to run the model itself. It's no rocket science, but the results are also not satisfying. I use it occasionally for very easy tasks, like fixing typos or updating metadata in blog posts. So yeah, it improves productivity. But coding-wise it's far from Codex, Claude et al.
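For anyone curious, the Ollama route is roughly this (the model tag is just an example of a Qwen coding model from the Ollama library; check `ollama list` and the library page for what actually fits in 36 GB):

```shell
# Pull a quantized Qwen coding model and run it locally via Ollama
# (example tag; smaller quantizations leave more headroom on a 36 GB machine)
ollama pull qwen2.5-coder:7b

# One-off prompt from the CLI -- this is the kind of "easy task" usage I mean
ollama run qwen2.5-coder:7b "Fix the typos in this sentence: Teh quick brwon fox"
```

Tools like Continue then just point at the local Ollama server, so the editor integration is mostly configuration, not code.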