azuanrb 8 hours ago

You still need ridiculously high-spec hardware, and at Apple's prices, that isn't cheap. Even if you can afford it (most people can't), the local models you can run are still limited and they still underperform. It's much cheaper to pay for a cloud solution and get significantly better results. In my opinion, the article is right: we need a better way to run LLMs locally.

onion2k 7 hours ago | parent | next [-]

> You still need ridiculously high-spec hardware, and at Apple's prices, that isn't cheap.

You can easily run models like Mistral and Stable Diffusion in Ollama and Draw Things, and with a little effort you can run newer models like Devstral (the MLX version) and Z Image Turbo using LM Studio and ComfyUI. It isn't as fast as using a good Nvidia GPU or a cloud GPU, but it's certainly good enough to play around with and learn from. I've written a bunch of apps that give me a browser UI talking to an API provided by an app running a model locally, and it works perfectly well. I did that on an 8GB M1 for 18 months and then recently upgraded to a 24GB M4 Pro. I still have the M1 on my network for doing AI things in the background.
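The plumbing for that pattern is genuinely tiny. Here's a rough sketch against Ollama's local HTTP API (assuming ollama serve is running on its default port 11434 and you've pulled a model with ollama pull mistral; the model name and prompt are just placeholders):

    // Rough sketch: a fetch against Ollama's local HTTP API.
    // Assumes `ollama serve` is running on the default port 11434
    // and the model was pulled with `ollama pull mistral`.
    async function generate(prompt: string): Promise<string> {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        // stream: false returns one JSON object instead of chunked output
        body: JSON.stringify({ model: "mistral", prompt, stream: false }),
      });
      if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
      const data = await res.json();
      return data.response; // the generated completion text
    }

    generate("Why is the sky blue?").then(console.log);

Point a browser UI at a wrapper around that and you have the whole setup.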

liuliu 4 hours ago | parent [-]

You can run newer models like Z Image Turbo or FLUX.2 [dev] using Draw Things with no effort too.

whitehexagon 6 hours ago | parent | prev | next [-]

I was pleasantly surprised at the speed and power of my second-hand M1 Pro 32GB running Asahi & Qwen3:32B. It does all I need, and I don't mind the reading-pace output, although I'd be tempted by an M2 Ultra if the second-hand market hadn't also exploded with the recent RAM market manipulations.

Anyway, I'm on a mission to have no subscriptions in the New Year. Plus it feels wrong to be contributing towards my own irrelevance (AGI).

almosthere 7 hours ago | parent | prev | next [-]

$749 for an M4 Air at Amazon right now.

tossandthrow 7 hours ago | parent [-]

Try running anything interesting on those 8GB of RAM.

You need 96GB or 128GB to do non-trivial things, and that is not yet $749.
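The back-of-envelope behind those numbers: the weights alone take roughly params × bits-per-weight / 8, plus headroom for KV cache and runtime. A quick sketch (the function and its 1.2 overhead factor are my own rough guesses, not measurements):

    // Back-of-envelope sketch: RAM needed to hold a quantized model.
    // The 1.2 overhead factor (KV cache, context, runtime) is a rough
    // guess, not a measurement.
    function estimateGb(paramsBillions: number, bitsPerWeight: number): number {
      const weightsGb = (paramsBillions * bitsPerWeight) / 8; // 1B params at 8 bits ≈ 1 GB
      return weightsGb * 1.2;
    }

    console.log(estimateGb(7, 4).toFixed(1));  // ~4.2 GB  -> squeezes into 8GB
    console.log(estimateGb(32, 4).toFixed(1)); // ~19.2 GB -> wants 24-32GB
    console.log(estimateGb(70, 4).toFixed(1)); // ~42.0 GB -> wants 64GB+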

badc0ffee 7 hours ago | parent | next [-]

Fair enough, but they start at 16GB nowadays.

kylec 5 hours ago | parent | prev | next [-]

The M4 starts with 16GB, though that can also be tight for local LLMs. You can get one with 24GB for $1,149 right now, which is good value.

jki275 6 hours ago | parent | prev [-]

64GB is fine.

kibwen 6 hours ago | parent | next [-]

This subthread is about the MacBook Air, which tops out at 32GB and can't be upgraded further.

Browsing the Apple website, it looks like the cheapest MacBook with 64GB of RAM is the MacBook Pro with the M4 Max (40-core GPU), which starts at $3,899, i.e. more than five times the price quoted above.

seanmcdirmid 5 hours ago | parent | prev [-]

If you are going for 64GB, you need at least a Max chip or you will be bandwidth/GPU limited.
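The intuition: decoding reads (roughly) every weight once per generated token, so throughput is capped near memory bandwidth divided by model size. A rough sketch using Apple's published peak bandwidth figures (rounded; real throughput is lower):

    // Sketch of why bandwidth dominates: each generated token reads
    // roughly every weight once, so decode speed is capped near
    // bandwidth / model size. Bandwidth figures are Apple's published
    // peaks, rounded; sustained throughput is lower in practice.
    function maxTokensPerSec(bandwidthGBs: number, modelGb: number): number {
      return bandwidthGBs / modelGb;
    }

    const modelGb = 40; // e.g. a ~70B model at 4-bit
    console.log(maxTokensPerSec(273, modelGb).toFixed(1)); // M4 Pro: ~6.8 tok/s ceiling
    console.log(maxTokensPerSec(546, modelGb).toFixed(1)); // M4 Max: ~13.7 tok/s ceiling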

jki275 6 hours ago | parent | prev [-]

I bought my M1 Max with 64GB of RAM used. It's not that expensive.

Yes, the models it can run don't perform like ChatGPT or Claude 4.5, but they're still very useful.