cmclaughlin 2 hours ago

I also expect local LLMs to catch up to the cloud providers.

I spent last weekend experimenting with Ollama and LM Studio, and I was impressed by how good Qwen3-Coder is. It's not as good as Claude, but it's close - maybe even better in some ways.
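
For anyone who wants to try it, here's a minimal sketch of calling Ollama's local HTTP API from Python once you've pulled a model. The model tag "qwen3-coder" is an assumption on my part - run `ollama list` to see what you actually have installed:

    import json
    import urllib.request

    # Ollama serves a local HTTP API on port 11434 by default.
    # Model tag "qwen3-coder" is an assumption; check `ollama list`.
    payload = {
        "model": "qwen3-coder",
        "prompt": "Write a Python function that reverses a linked list.",
        "stream": False,  # one complete JSON response instead of a stream
    }

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])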

As I understand it, the latest Macs are good for local LLMs because their unified memory lets the GPU address the same RAM pool as the CPU, so the model size you can run is limited mainly by total RAM. 32GB on one of the newer M-series chips seems to be the "sweet spot" for price versus performance.
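
Rough back-of-envelope on why 32GB works - all numbers here are assumptions for illustration, not measurements. A ~30B-parameter model (I believe the smaller Qwen3-Coder variant is around that size) quantized to 4 bits per weight needs roughly 15 GB for the weights alone, plus a few GB for KV cache and the runtime:

    # Back-of-envelope memory estimate for a quantized local model.
    # All numbers are illustrative assumptions, not benchmarks.
    params = 30e9          # ~30B parameters (assumed model size)
    bits_per_weight = 4    # typical 4-bit quantization (e.g. Q4)

    weight_gb = params * bits_per_weight / 8 / 1e9
    overhead_gb = 4        # rough allowance for KV cache, runtime, context

    print(f"weights: ~{weight_gb:.0f} GB, total: ~{weight_gb + overhead_gb:.0f} GB")
    # weights: ~15 GB, total: ~19 GB -> fits in 32GB unified memory with headroom

That leaves headroom for the OS and a decent context window on 32GB, while 16GB would get tight.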