mark_l_watson 5 days ago

I bought a Mac Mini M2 Pro with 32 GB of RAM 18 months ago for $1,900. It is sufficient to run quantized local models well, up to and including the ~40B-parameter range.
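A rough back-of-envelope sketch of why 32 GB of unified memory handles a ~40B quantized model (the ~20% overhead factor for KV cache and runtime is an assumption; actual usage varies with context length and runtime):

```python
def model_mem_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Approximate RAM needed: weights stored at the quantized bit width,
    plus ~20% for KV cache and runtime overhead (rough assumption)."""
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# 40B parameters at 4-bit quantization: ~24 GB, which fits in 32 GB unified memory
print(round(model_mem_gb(40, 4), 1))  # → 24.0
```

The same model at full 16-bit precision would need roughly 96 GB, which is why quantization is what makes this hardware viable for models of that size.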

When local models don’t cut it, I like Gemini 2.5 flash/pro and gemini-cli.

There are a lot of good options for commercial APIs and for running local models. I suggest choosing one good local model and one good commercial API, and spending more time building things than constantly re-evaluating all the options.

criddell 5 days ago | parent | next [-]

Are there any particular sources you found helpful to get started?

It's been a while since I checked out Mini prices. Today, $2400 buys an M4 Pro with all the cores, 64GB RAM, and 1TB storage. That's pleasantly surprising...

mark_l_watson 5 days ago | parent [-]

You can read my book on local models with Ollama free online: https://leanpub.com/ollama/read

criddell 5 days ago | parent [-]

Awesome, thanks!
