gcr · 3 days ago
For new folks, you can get a local code agent running on your Mac like this:

  1. $ npm install -g @openai/codex
  2. $ brew install ollama; ollama serve
  3. $ ollama pull gpt-oss:20b
  4. $ codex --oss -m gpt-oss:20b

This runs locally without Internet access. I don't know whether Codex has telemetry, but you should be able to turn it off if so. You need an M1 Mac or better with at least 24GB of GPU memory. The model is pretty big, about 16GB of disk space in ~/.ollama. Be careful: the 120b model is about 1.5× better than this 20b variant, but needs roughly 5× the resources. (A quick way to sanity-check the setup is sketched below.)
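A minimal sanity check, assuming Ollama's default port 11434 (these are standard Ollama commands, not specific to Codex):

  # list the models Ollama has pulled locally
  $ curl http://localhost:11434/api/tags
  # run a one-off prompt to confirm gpt-oss:20b loads and responds
  $ ollama run gpt-oss:20b "say hello"

If both work with Wi-Fi off, the whole stack is running on-device.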
windexh8er · 3 days ago
I've been really impressed by OpenCode [0]. It removes the limitations of the frontier TUIs, and it's feature-complete and performant compared to Codex or Claude Code.
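To try it against the same local model, a minimal sketch (hedged: opencode-ai is the npm package name as I recall it, and the Ollama hookup goes through OpenCode's provider settings pointed at Ollama's OpenAI-compatible endpoint; check the project README for current details):

  # install the OpenCode TUI
  $ npm install -g opencode-ai
  # Ollama exposes an OpenAI-compatible API here by default; confirm it's up,
  # configure it as a provider in OpenCode, then launch
  $ curl http://localhost:11434/v1/models
  $ opencode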
nickthegreek · 3 days ago
Have you been able to build or iterate on anything of value using just the 20b to vibe code?
abacadaba · 3 days ago
As much as I've been using LLMs via API all day every day, being able to run one locally on my MBA and talk to my laptop still feels like magic.
giancarlostoro · 3 days ago
LM Studio is even easier, and things like the JetBrains IDEs will sync with LM Studio, as will Zed.
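For context, LM Studio serves an OpenAI-compatible API that those editors point at. A minimal sketch, assuming LM Studio's default port 1234 and a gpt-oss model loaded in its UI (the model identifier below is illustrative):

  # confirm LM Studio's local server is up and list loaded models
  $ curl http://localhost:1234/v1/models
  # send a test chat completion to the local server
  $ curl http://localhost:1234/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "openai/gpt-oss-20b", "messages": [{"role": "user", "content": "hello"}]}'

Editors then only need the base URL http://localhost:1234/v1 in their model-provider settings.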