andai 7 hours ago

So I'm hearing about a lot of people running LLMs on Apple hardware. But is there actually anything useful you can run? Does it run at a usable speed? And is it worth the cost? Because the last time I checked, the answer to all three questions appeared to be no.

Though maybe it depends on what you're doing? (Although if you're doing something simple like embeddings, then you don't need the Apple hardware in the first place.)

anonzzzies 2 hours ago | parent | next [-]

I was sitting in an airplane next to a guy on some kind of MacBook Pro who was coding in Cursor with a local LLM. We got talking, and he said there are obviously differences, but for his style of 'English coding' (he described roughly what code to write or which files to change, in English but sloppier than code, otherwise he would just code it himself) it works really well. And indeed that's what he could demo. The model (gpt-oss, I believe) did pretty well in his Next.js project, and fast too.
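For reference, a local model served by Ollama (or LM Studio, a llama.cpp server, etc.) exposes an OpenAI-compatible endpoint, so editor tooling or a script can talk to it the same way it would to a hosted model. A minimal sketch, assuming an Ollama server on its default port and the gpt-oss:20b model mentioned elsewhere in the thread; the prompt is just an illustration:

    # Sketch: driving a local model through an OpenAI-compatible endpoint.
    # The model name, port, and prompt are assumptions, not from the comment above.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
        api_key="ollama",  # local servers typically ignore the key, but it must be non-empty
    )

    resp = client.chat.completions.create(
        model="gpt-oss:20b",
        messages=[{
            "role": "user",
            "content": "Add a loading spinner to the dashboard page in my Next.js app.",
        }],
    )
    print(resp.choices[0].message.content)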

sueders101 2 hours ago | parent | prev | next [-]

I've tried out gpt-oss:20b on a MacBook Air (via Ollama) with 24GB of RAM. In my experience its output is comparable to what you'd get out of older models, and the OpenAI benchmarks seem accurate: https://openai.com/index/introducing-gpt-oss/ . Definitely a usable speed. Not instant, but roughly 5 tokens per second of output if I had to guess.
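If you want an actual number rather than a guess, Ollama reports generation stats in its native API. A rough sketch, assuming the default local endpoint and the same model; field names follow Ollama's documented /api/generate response:

    # Sketch: measuring output speed with Ollama's native HTTP API.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "gpt-oss:20b",
            "prompt": "Explain the difference between TCP and UDP in two sentences.",
            "stream": False,
        },
        timeout=600,
    )
    data = resp.json()

    # eval_count = output tokens, eval_duration = nanoseconds spent generating them
    tokens_per_sec = data["eval_count"] / (data["eval_duration"] / 1e9)
    print(f"{data['eval_count']} tokens at {tokens_per_sec:.1f} tok/s")
    print(data["response"])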

fhsm 6 hours ago | parent | prev | next [-]

This paper shows a use case running on Apple silicon that’s theoretically valuable:

https://pmc.ncbi.nlm.nih.gov/articles/PMC12067846/

Who cares if the result is right or wrong, etc., since it will all be different in a year anyway … it's just interesting to see a test of desktop-class hardware go OK.

seanmcdirmid 5 hours ago | parent | prev | next [-]

I have an M3 Max MBP with 64GB of RAM, and I can run a lot at a useful speed (LLMs run fine; diffusion image models run OK, although not as fast as they would on a 3090). My laptop isn't typical, though: it isn't a standard MBP with a base or Pro processor.

jki275 6 hours ago | parent | prev | next [-]

I can definitely write code with a local model like Devstral Small, a quantized Granite, or a quantized DeepSeek on an M1 Max with 64GB of RAM.

DANmode 7 hours ago | parent | prev [-]

Of course it depends on what you're doing.

Do you work offline often?

Essential.