| ▲ | Hamuko 7 hours ago |
| I've tried to use a local LLM on an M4 Pro machine and it's quite painful. Not surprised that people into LLMs would pay for tokens instead of trying to force their poor MacBooks to do it. |
|
| ▲ | atwrk 7 hours ago | parent | next [-] |
| Local LLM inference is all about memory bandwidth, and an M4 Pro has only about as much as a Strix Halo or DGX Spark. That's why the older Ultras are popular with the local LLM crowd. |
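A back-of-envelope sketch of why bandwidth dominates: each decoded token requires reading roughly the full weight set (for a dense model), so tokens/sec is approximately bandwidth divided by model size. The bandwidth figures below are approximate spec-sheet numbers, added for illustration, not claims from the thread.

```python
# Rough decode speed: memory bandwidth / bytes read per token
# (about the full quantized weight size for a dense model).
def rough_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# Approximate spec-sheet bandwidth figures (GB/s):
machines = {"M4 Pro": 273, "Strix Halo": 256, "M2 Ultra": 800}
for name, bw in machines.items():
    # assuming a ~20 GB quantized model
    print(f"{name}: ~{rough_tokens_per_sec(bw, 20):.0f} tok/s upper bound")
```

This is an upper bound; real throughput is lower, and MoE models (which read only active experts per token) beat the estimate.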
|
| ▲ | usagisushi 5 hours ago | parent | prev | next [-] |
| Qwen 3.5 35B-A3B and 27B have changed the game for me. I expect we'll see something comparable to Sonnet 4.6 running locally sometime this year. |
| |
| ▲ | prettyblocks 19 minutes ago | parent [-] | | Could be, but it likely won't be able to support the massive context window required for performance on par with Sonnet 4.6 |
|
|
| ▲ | freeone3000 7 hours ago | parent | prev | next [-] |
| I’m super happy with it for embedding, image recog, and semantic video segmentation tasks. |
|
| ▲ | giancarlostoro 7 hours ago | parent | prev | next [-] |
| What are the other specs and how does your setup look? You need a minimum of 24 GB of RAM to run models of 16 GB or less. |
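A rough way to check that sizing: weight memory is parameter count times bytes per weight, plus headroom for KV cache and runtime overhead. The ~20% overhead figure here is an assumption for illustration, not from the thread.

```python
# Rough RAM needed to load a model: params x bytes per weight,
# plus assumed ~20% headroom for KV cache and runtime overhead.
def rough_model_ram_gb(params_b: float, bits_per_weight: int,
                       overhead: float = 0.2) -> float:
    weights_gb = params_b * bits_per_weight / 8  # 1B params at 8 bits ~= 1 GB
    return weights_gb * (1 + overhead)

# e.g. a 27B model at 4-bit quantization:
print(f"{rough_model_ram_gb(27, 4):.1f} GB")  # ~16 GB, hence a 24 GB machine
```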
| |
| ▲ | jazzyjackson 5 hours ago | parent | next [-] | | Tokens per second is abysmal no matter how much ram you have | | |
| ▲ | giancarlostoro 3 hours ago | parent [-] | | Some models run worse than others, but I have gotten reasonable performance on my M4 Pro with 24 GB of RAM |
| |
| ▲ | SV_BubbleTime 7 hours ago | parent | prev | next [-] | | This is typically true. And while it is stupid slow, you can run models off hard drive or swap space. You wouldn’t do it normally, but it can be done to check an answer in one model versus another. | |
| ▲ | Hamuko 7 hours ago | parent | prev [-] | | 48 GB MacBook Pro. All of the models I've tried have been slow and also offered terrible results. | | |
| ▲ | giancarlostoro 3 hours ago | parent [-] | | Try a piece of software called TG Pro; it lets you override fan settings. Apple likes to let your Mac burn in an inferno before the fans kick in. It gives me more consistent throughput. I have less RAM than you and I can run some smaller models just fine, with reasonable performance. GPT20b was one. |
|
|
|
| ▲ | andoando 6 hours ago | parent | prev [-] |
| Local LLMs are useful for stuff like tool calling |
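A minimal sketch of what that loop looks like: the local model emits a JSON "call", the host dispatches it to a real function, and the result goes back into the conversation. All names here (the `get_weather` tool, the JSON shape) are hypothetical, just to show the pattern.

```python
import json

# Hypothetical tool registry the host exposes to the model.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch(model_output: str) -> str:
    # The model is prompted to emit e.g.
    # {"tool": "get_weather", "args": {"city": "SF"}}
    call = json.loads(model_output)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])  # result is fed back as the next turn

print(dispatch('{"tool": "get_weather", "args": {"city": "SF"}}'))
```

Because each call is short and structured, even a small local model can be reliable here in a way it isn't for long-form generation.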
| |
| ▲ | renewiltord 3 hours ago | parent [-] | | What models are you using? I’ve found that SOTA Claudes outperform even gpt-5.2 so hard on this that it’s cheaper to just use Sonnet: the number of output tokens to solve a problem is so much lower that TCO is lower. I’m in SF where home power is 54¢/kWh. Sonnet is so fast too. GPT-5.2 needs reasoning tuned up to get tool calling reliable, and Qwen3 Coder Next wasn’t close. I haven’t tried Qwen3.5-A3B. Hearing rave reviews though. If you’re successfully using some model, knowing that alone is very helpful to me. |
|