jrmg (4 hours ago):
Is there a reliable guide somewhere to setting up local AI for coding? (Please don't say "just Google it" - that just results in a morass of AI slop/SEO pages with out-of-date, non-self-consistent, incorrect, or impossible instructions.) I'd like to be able to use a local model (which one?) to power Copilot in VS Code, and run coding agents (not general-purpose OpenClaw-like agents) on my M2 MacBook. I know it'll be slow. I suspect this is actually fairly easy to set up - if you know how.
randusername (a few seconds ago):
Personally I'd start with llamafile [0], then move to compiling your own llama.cpp. It's not as bad as you might think to compile llama.cpp for your target architecture and spin up an OpenAI-compatible API endpoint. It can even download the models for you.
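For concreteness, here's a rough sketch of the llama.cpp route on Apple Silicon. The build steps are the standard CMake flow from the llama.cpp README (the Metal backend is on by default for macOS); the model named in the `-hf` flag is just an illustrative placeholder, not a recommendation - substitute whatever GGUF model fits your RAM:

```shell
# Clone and build llama.cpp (Metal acceleration is the default on Apple Silicon)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j

# Start an OpenAI-compatible HTTP server; -hf fetches a GGUF model
# from Hugging Face on first run (model name here is an example only)
./build/bin/llama-server -hf ggml-org/gemma-3-4b-it-GGUF --port 8080
```

Once the server is up, anything that speaks the OpenAI chat-completions API (editor plugins included) can be pointed at `http://localhost:8080/v1`.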
AstroBen (4 hours ago):
Ollama or LM Studio are very simple to set up. You're probably not going to get anything working well as an agent on an M2 MacBook, but smaller models do surprisingly well for focused autocomplete. Maybe the Qwen3.5 9B model would run decently on your system?
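The Ollama path, sketched below, is about as short as local setup gets. The model tag is an assumption on my part (I'm using `qwen2.5-coder:7b`, a coding model I know is in the Ollama library, rather than the exact model named above); swap in whatever fits your machine:

```shell
# Pull a small coding model and try it interactively
# (model tag is an example; pick one that fits your RAM)
ollama pull qwen2.5-coder:7b
ollama run qwen2.5-coder:7b "Write a Python function that reverses a string."

# Ollama also serves an OpenAI-compatible API on port 11434,
# which is what most editor integrations point at
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5-coder:7b", "messages": [{"role": "user", "content": "hello"}]}'
```

LM Studio is the GUI equivalent: it has a model browser and a "local server" toggle that exposes a similar OpenAI-compatible endpoint.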
chatmasta (2 hours ago):
Any time I google something on this topic, the results are useful but also out of date, because this space is moving so absurdly fast.