mstaoru 4 hours ago

I periodically try to run these models on my MBP M3 Max 128G (which I bought with a mind to run local AI). I have a certain deep-research question (in a field that is deeply familiar to me) that I ask when I want to gauge a model's knowledge.

So far Opus 4.6 and Gemini Pro are very satisfactory, producing great answers fairly fast. Gemini is very fast at 30-50 seconds; Opus is very detailed and comes in at about 2-3 minutes.

Today I ran the question against local qwen3.5:35b-a3b. It puffed away for 45 (!) minutes, produced a very generic answer with errors, and made my laptop sound like it's going to take off at any moment.

I wonder what I'm doing wrong... How am I supposed to use this for agentic coding on a large enough codebase? It would take days (and a 3M Peltor X5A) to produce anything useful.

Paddyz 4 minutes ago | parent | next [-]

The 35b-a3b model is misleading in its naming - it's a MoE with only 3B active parameters per forward pass. You're essentially running a 3B-class model for inference quality while paying the memory cost of loading 35B parameters. That's why it feels so much worse than Opus or Gemini, which are likely 10-100x larger in effective compute per token.
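To make the "3B-class quality" point concrete, here's a back-of-envelope sketch. The rule of thumb that a decoder forward pass costs roughly 2 FLOPs per active parameter per token is an approximation, and the numbers are the ones from this thread (35B total / 3B active), not benchmarks:

```python
# Rough per-token compute for a MoE vs. a dense model of the same total size.
# Only the routed (active) experts count toward quality-relevant compute.

def flops_per_token(active_params: float) -> float:
    """~2 FLOPs per active weight per generated token (rule of thumb)."""
    return 2 * active_params

moe_35b_a3b = flops_per_token(3e9)   # 35B total, but only 3B active per token
dense_35b = flops_per_token(35e9)    # a dense 35B model touches every weight

print(f"MoE uses {moe_35b_a3b / dense_35b:.0%} of the dense model's "
      f"compute per token")
# -> MoE uses 9% of the dense model's compute per token
```

So you hold 35B parameters in RAM but spend under a tenth of a dense 35B model's compute on each token, which is why the quality tracks the small number, not the big one.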

For your M3 Max 128G setup, try Qwen3.5-122B-A10B with a 4-bit quantization instead (should fit in ~50-60GB). 10B active params is a massive step up from 3B and you'll actually see the quality difference people are talking about. MLX versions specifically optimized for Apple Silicon will also give you noticeably better tok/s than running through ollama.

The general rule I've settled on: MoE models with <8B active params are great for structured tasks (reformatting, classification, simple completions) but fall apart on anything requiring deep reasoning or domain knowledge. For your research question use case, you want either a dense 27B+ model or a MoE with 10B+ active params.

lm28469 4 hours ago | parent | prev | next [-]

> Wonder what am I doing wrong?

You're comparing ~100B-parameter open models running on a consumer laptop vs. private models with at the very least 1T parameters running on racks of bleeding-edge professional GPUs.

Local agentic coding is closer to "shit me the boilerplate for an Android app" than "deep research questions", especially on your machine.

vlovich123 4 hours ago | parent | next [-]

The hardware difference explains runtime performance differences, not task performance.

Speculation is that the frontier models are all below 200B parameters but a 2x size difference wouldn’t fully explain task performance differences

nl 12 minutes ago | parent | next [-]

> Speculation is that the frontier models are all below 200B parameters

Some versions of some of the models are around that size, which you might hit, for example, with the ChatGPT auto-router.

But the frontier models are all over 1T parameters. Source: watch interviews with people who have left one of the big three labs, now work at the Chinese labs, and are talking about how to train 1T+ models.

NamlchakKhandro an hour ago | parent | prev | next [-]

> The hardware difference explains runtime performance differences, not task performance.

Yes it does.

ses1984 4 hours ago | parent | prev [-]

Who would have thought AI labs with billions upon billions in R&D budget would have better models than a free alternative?

delaminator 4 hours ago | parent | prev [-]

Looks at the headline: Qwen3.5 122B and 35B models offer Sonnet 4.5 performance on local computers

lm28469 4 hours ago | parent [-]

Yes and Devstral 2 24b q4 is supposed to be 90% as good but it can't even reliably write to a file on my machine.

There are the benchmarks, the promises, and what everybody can try at home

8note 3 hours ago | parent [-]

maybe a harness problem?

aspenmartin 4 hours ago | parent | prev | next [-]

Well, Opus and Gemini are probably running on multiple H200 equivalents, maybe hundreds of thousands of dollars of inference equipment. Local models are inherently inferior; even the best Mac money can buy will never hold a candle to latest-generation Nvidia inference hardware, and the local models, even the largest, are still not quite at the frontier. The ones you can plausibly run on a laptop (where "plausible" really means "45 minutes and making my laptop sound like it is going to take off at any moment") are further behind still. Like they said, you're getting Sonnet 4.5 performance, which is two generations ago; speaking from experience, Opus 4.6 is night and day compared to Sonnet 4.5.

zozbot234 4 hours ago | parent [-]

> Well Opus and Gemini are probably running on multiple H200 equivalents, maybe multiple hundreds of thousands of dollars of inference equipment.

But if you've got that kind of equipment, you aren't using it to support a single user; it gets the best utilization by running very large batches with massive parallelism across GPUs, so that's what you're going to do. There is such a thing as a useful middle ground that may not give you the absolute best performance but will be found broadly acceptable and still be quite viable for a home lab.

aspenmartin 3 hours ago | parent [-]

Batching helps with efficiency, but you can't fit Opus into anything less than hundreds of thousands of dollars in equipment.

Local models are more than a useful middle ground; they're essential and will never go away. I was just addressing the OP's question about why he observed the difference he did. One is an API call to the world's most advanced compute infrastructure; the other is running on a $500 CPU.

Lots of uses for small, medium, and large models; they all have important places!

wolvoleo 2 hours ago | parent | prev | next [-]

Well, first of all, you're running a long, intense task on a thermally constrained machine. Your MacBook Pro is optimised for portability and battery life, not max performance under load, and Apple's obsession with thinness overrules thermal performance. Short peaks will be OK, but a 45-minute task will thoroughly saturate the cooling system.

Even on servers this can happen. At work we have a 2U server with two 250W-class GPUs, and I found that by pinning the case fans at 100% I can get 30% more performance out of GPU tasks, which translates to several days saved for our use case. It does mean I can literally hear the fans screaming in the hallway outside the equipment room, but ok lol, who cares. A laptop just can't compare.

Something with a desktop GPU, or better yet HBM3, would run much better. Local models get slow when you use a ton of context, and the memory bandwidth of a MacBook Pro, while better than a typical PC's, is still not amazing.
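The bandwidth point has a simple upper-bound model: at batch size 1, every generated token has to stream all active weights from memory. A sketch, where the ~400 GB/s M3 Max figure and the 4-bit weights are illustrative assumptions:

```python
# Theoretical decode-speed ceiling for memory-bandwidth-bound generation.
# Each token requires reading every active weight once from RAM.

def decode_ceiling_tps(active_params: float, bits_per_weight: float,
                       bandwidth_gbps: float) -> float:
    """Upper bound on tokens/sec given weight traffic per token."""
    bytes_per_token = active_params * bits_per_weight / 8
    return bandwidth_gbps * 1e9 / bytes_per_token

# 3B active weights at 4-bit on ~400 GB/s unified memory:
print(f"{decode_ceiling_tps(3e9, 4, 400):.0f} tok/s theoretical ceiling")
```

Real throughput lands well below that once attention, KV-cache reads, and long prompts enter the picture; the formula only bounds it from above, and it's why small-active-parameter MoE models feel fast while long-context runs still crawl.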

And yeah the heaviest tasks are not great on local models. I tend to run the low hanging fruit locally and the stuff where I really need the best in the cloud. I don't agree local models are on par, however I don't think they really need to be for a lot of tasks.

pamcake 2 hours ago | parent [-]

To your point, one can get a great performance boost by propping the laptop onto a roost-like stand in front of a large fan. Nothing like a cooling system actually built for sustained load but still.

__mharrison__ 4 hours ago | parent | prev | next [-]

Were you using mlx-lm? I've had good performance with that on Macs. (Sadly, the lead developer just left Apple.)

Admittedly, I haven't tried these models on my Mac, but I have on my DGX Spark, and they ran fine. I didn't see the slowdown you're mentioning.

zozbot234 4 hours ago | parent | prev | next [-]

Running local AI models on a laptop is a weird choice. The Mini and especially the Studio form factor will have better cooling, lower prices for comparable specs and a much higher ceiling in performance and memory capacity.

stavros 4 hours ago | parent | next [-]

I can never see the point, though. Performance isn't anywhere near Opus, and even that gets confused following instructions or making tool calls in demanding scenarios. Open weights models are just light years behind.

I really, really want open weights models to be great, but I've been disappointed with them. I don't even run them locally, I try them from providers, but they're never as good as even the current Sonnet.

vunderba 3 hours ago | parent | next [-]

I can't speak to using local models as agentic coding assistants, but I have a headless 128GB RAM machine serving llama.cpp with a number of local models that I use on a daily basis.

- Qwen3-VL picks up new images in a NAS, auto captions and adds the text descriptions as a hidden EXIF layer into the image, which is used for fast search and organization in conjunction with a Qdrant vector database.

- Gemma3:27b is used for personal translation work (mostly English and Chinese).

- Llama3.1 spins up for sentiment analysis on text.

stavros 3 hours ago | parent [-]

Ah yeah, self-contained tasks like these are ideal, true. I'm more using it for coding, or for running a personal assistant, or for doing research, where open weights models aren't as strong yet.

vunderba 2 hours ago | parent [-]

Understood. Research would make me especially leery; I’d be afraid of losing any potential gains as I'd feel compelled to always go and validate its claims (though I suppose you could mitigate it a little bit with search engine tooling like Kagi's MCP system).

andoando 4 hours ago | parent | prev | next [-]

They're great for some product use cases where you dont need frontier models.

stavros 4 hours ago | parent [-]

Yeah, for sure, I just don't have many of those. For example, the only use I have for Haiku is for summarizing webpages, or Sonnet for coding something after Opus produces a very detailed plan.

Maybe I should try local models for home automation, Qwen must be great at that.

lm28469 4 hours ago | parent | prev [-]

They're like six months behind on most benchmarks, and people already claimed coding was solved six months ago, so which is it? The current version is the baseline that solves everything, but as soon as the new version is out, the old one becomes utter trash and barely usable.

zozbot234 4 hours ago | parent | next [-]

That's very large models at full precision, though. Stuff that will crawl even on a decent homelab, despite being largely MoE-based and even quantization-aware, which reduces the number and size of active parameters.

stavros 4 hours ago | parent | prev [-]

That's just a straw man. Each frontier model version is better than the previous one, and I use it for harder and harder things, so I have very little use for a version that's six months behind. Maybe for simple scripts they're great, but for a personal assistant bot, even Opus 4.6 isn't as good as I'd like.

satvikpendem 4 hours ago | parent | prev | next [-]

I can take a laptop on the train.

wat10000 3 hours ago | parent | prev [-]

I have a laptop already, so that's what I'm going to use.

notreallya 4 hours ago | parent | prev | next [-]

Sonnet 4.5 level isn't Opus 4.6 level, simple as

rienko 4 hours ago | parent | prev | next [-]

Use a larger model like Qwen3.5-122B-A10B quantized to 4/5/6 bits depending on how much context you need; MLX versions give the best tok/s on Mac hardware.

If you are able to run something like mlx-community/MiniMax-M2.5-3bit (~100GB), my guess is the results will be much better than 35b-a3b.

culi 4 hours ago | parent | prev | next [-]

Well, you can't run Gemini Pro or Opus 4.6 locally, so you're comparing a locally run model to cloud platforms.

furyofantares 4 hours ago | parent | prev | next [-]

Can you try asking Sonnet 4.5 the same question, since that is what this model is claimed to be on par with?

andxor 3 hours ago | parent | prev | next [-]

You're not doing anything wrong. The Chinese models are not as good as advertised. Surprise surprise!

CamperBob2 3 hours ago | parent | prev [-]

Try the 27B dense model. It will likely do much better than the 35B MoE with only 3B active parameters.

Also, performance on research-y questions isn't always a good indicator of how the model will do for code generation or agent orchestration.