jareds 6 hours ago:
What's the current situation for coding with local LLMs on decent hardware? I have an M3 Max with 64 GB of RAM and am thinking of starting with Ollama and Opencode. Is that a useful stack for smaller personal projects?
speedgoose 4 hours ago:
It's getting there. You could give Qwen3 a try. It's still worth paying for better models in the cloud, but local models are now better than nothing.
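
If you want to sanity-check a local model before wiring it into a coding tool, here's a minimal sketch against Ollama's local chat API (the endpoint and payload shape are Ollama's documented defaults; the model tag is just an example, substitute whatever `ollama list` shows):

    import json
    import urllib.request

    # Ollama's default local endpoint (change if you set OLLAMA_HOST).
    url = "http://localhost:11434/api/chat"

    payload = {
        "model": "qwen3:30b",  # example tag; use whatever you pulled
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        "stream": False,  # one JSON response instead of streamed chunks
    }

    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())

    print(reply["message"]["content"])

Anything that speaks this API, including editor integrations, can point at the same endpoint, which is roughly what the Ollama-plus-editor stacks do under the hood.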
pohl 5 hours ago:
One nice recent development is Ollama's support for MLX optimization on Mac hardware. It's not yet obvious how to tell whether the model you're running is actually using it, though, so it's still rough around the edges.
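
I'm not aware of a documented way to confirm which backend is in use; the closest I've found is inspecting the model's metadata via Ollama's /api/show endpoint, which at least reports the format and quantization (whether it surfaces the active runtime is not guaranteed, so treat this as a starting point alongside the server logs):

    import json
    import urllib.request

    # Ollama's model-inspection endpoint; the model name is an example.
    req = urllib.request.Request(
        "http://localhost:11434/api/show",
        data=json.dumps({"model": "qwen3:30b"}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        info = json.loads(resp.read())

    # "details" carries format/family/quantization; check here and in the
    # server logs for hints about how the model was actually loaded.
    print(json.dumps(info.get("details", {}), indent=2))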
satvikpendem 3 hours ago:
Use llama.cpp, or better yet, Unsloth Studio.
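
For reference, llama.cpp ships a server (llama-server) with an OpenAI-compatible endpoint, so any OpenAI-style client or editor plugin can point at it. A minimal sketch, assuming the server is already running on its default port with a model loaded:

    import json
    import urllib.request

    # llama-server's OpenAI-compatible endpoint (default port 8080).
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps({
            # llama-server serves the one loaded model; the name field
            # is largely ignored, so any placeholder works here.
            "model": "local",
            "messages": [
                {"role": "user", "content": "Explain tail recursion in one paragraph."}
            ],
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        out = json.loads(resp.read())

    print(out["choices"][0]["message"]["content"])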