literalAardvark 7 hours ago

Apple has a 10-18% market share for laptops. That's significant but it certainly isn't "most".

Most laptops can run at best a 7-14B model, even if you buy one with a high-spec graphics chip. These are not useful models unless you're writing spam.

Most desktops have a decent amount of system memory, but CPU inference from system RAM is too slow to be useful, and the models that would fit in 32-64GB of RAM need lots of interaction and hand-holding anyway.
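To make the speed claim concrete: token generation is typically memory-bandwidth bound, since each decoded token has to stream (roughly) all the weights through memory once. A rough sketch, with illustrative bandwidth numbers that are my assumptions, not benchmarks:

```python
# Bandwidth-bound decode estimate: each generated token reads roughly
# all model weights once, so tokens/sec ~ memory bandwidth / model size.
def est_tokens_per_sec(bandwidth_gbs: float, model_gb: float) -> float:
    return bandwidth_gbs / model_gb

# Illustrative figures (assumptions): dual-channel DDR4 desktop ~50 GB/s,
# an Apple Max-class chip ~400 GB/s; a ~30B model at 4-bit is ~15 GB.
model_gb = 15
for name, bw in (("DDR4 desktop", 50), ("Max-class chip", 400)):
    print(f"{name}: ~{est_tokens_per_sec(bw, model_gb):.0f} tok/s")
```

That gap (a few tokens/sec vs. tens of tokens/sec) is the difference between unusable and workable for interactive use.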

And that's for the easy part, inference. Training is much more expensive.

seanmcdirmid 5 hours ago | parent | next [-]

A Max chip can run 30B models quantized, and definitely has the RAM to fit them in memory. The base and Pro chips will be compute/bandwidth limited. Of course, the Ultra chip is even better than the Max, but those don't come in laptops yet.
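Back-of-the-envelope math for why a 30B quantized model fits (my numbers, roughly: 4-bit weights at ~0.5 bytes per parameter, plus an assumed ~20% overhead for KV cache and runtime buffers):

```python
# Rough memory estimate for a quantized LLM.
# Assumptions (illustrative): 4-bit weights ~ 0.5 bytes/param,
# plus ~20% overhead for KV cache, activations, and buffers.
def est_memory_gb(params_billion: float, bits_per_weight: float = 4.0,
                  overhead: float = 0.2) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9  # decimal GB

for n in (7, 14, 30):
    print(f"{n}B @ 4-bit: ~{est_memory_gb(n):.1f} GB")
```

By this estimate a 30B model at 4-bit needs on the order of 18 GB, which comfortably fits in a 32-64GB unified-memory machine but not in a typical laptop GPU's VRAM.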

nunodonato 6 hours ago | parent | prev [-]

My laptop is 4 years old and I only have 6GB of VRAM. I run mostly 4B and 8B models, and they are extremely useful in a variety of situations. Just because you can't replicate what you do in ChatGPT doesn't mean they don't have their use cases. It seems to me you know very little about what these models can do. Not to speak of models trained for specific use cases, or even smaller models like functiongemma or TTS/ASR models. (btw, I've trained models with my 6GB of VRAM too)

reactordev 3 hours ago | parent | next [-]

I’ll chime in and say I run LM Studio on my 2021 MacBook Pro M1 with no issues.

I have 16GB of RAM. I use Unsloth-quantized models like qwen3 and gpt-oss. I have some MCP servers, like Context7 and Fetch, that make sure the models have up-to-date information. I use continue.dev in VSCode or the OpenCode agent with LM Studio and write C++ code against Vulkan.
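For anyone curious what wires this together: LM Studio exposes an OpenAI-compatible HTTP server locally (by default on port 1234), which is what editor integrations like continue.dev talk to. A minimal sketch using only the standard library; the model identifier and prompt are placeholders for whatever you have loaded:

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions protocol.
# Default address; "model" is the identifier shown in LM Studio (placeholder here).
URL = "http://localhost:1234/v1/chat/completions"

payload = {
    "model": "qwen3",  # placeholder: use the id of your loaded model
    "messages": [
        {"role": "user", "content": "Sketch Vulkan instance creation in C++."}
    ],
    "temperature": 0.7,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=120) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
except OSError as e:
    # No local server running (LM Studio's server must be started first).
    print(f"Could not reach LM Studio's local server: {e}")
```

Because the protocol is OpenAI-compatible, any existing OpenAI client library also works by pointing its base URL at localhost.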

It’s more than capable. Is it fast? Not necessarily. Does it get stuck? Sometimes. Does it keep getting better? With every model release on huggingface.

Total monthly cost: $0

literalAardvark 3 hours ago | parent | prev [-]

A few examples of useful tasks would be appreciated. I do suffer from a sad lack of imagination.

nunodonato 3 hours ago | parent [-]

I suggest taking a look at /r/LocalLLaMA to see all sorts of cool things people do with small models.