literalAardvark 7 hours ago
Apple has a 10-18% market share for laptops. That's significant, but it certainly isn't "most". Most laptops can run at best a 7-14B model, even if you buy one with a high-spec graphics chip, and those models are not useful unless you're writing spam. Most desktops have a decent amount of system memory, but it can't be used to run LLMs at a useful speed, especially since anything you could fit in 32-64 GB of RAM would need lots of interaction and hand-holding. And that's just the easy part, inference. Training is much more expensive.
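(For scale, a rough back-of-envelope for weight memory alone, as a Python sketch. It ignores KV cache and runtime overhead, which add a few GB on top, and uses the common fp16 and 4-bit quantization widths:)

    # Weight memory only: params * bits_per_weight / 8 bytes.
    # KV cache and activations come on top of this.
    def weight_gb(params_billion: float, bits: float) -> float:
        return params_billion * 1e9 * bits / 8 / 1e9

    for n in (7, 14, 30):
        print(f"{n}B fp16: ~{weight_gb(n, 16):.0f} GB   "
              f"{n}B 4-bit: ~{weight_gb(n, 4):.1f} GB")
    # 7B fp16 ~14 GB, 7B 4-bit ~3.5 GB, 30B 4-bit ~15 GB

Which is roughly why a 4-bit 7B squeezes onto a consumer GPU while a 30B wants the unified memory of a Max.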
seanmcdirmid 5 hours ago
A Max CPU can run 30B models quantized, and definitely has the RAM to fit them in memory. The regular and Pro CPUs will be compute/bandwidth-limited. Of course, the Ultra CPU is even better than the Max, but it doesn't come in laptops yet.
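(A minimal sketch of what that looks like in practice, assuming llama-cpp-python compiled with Metal support; the GGUF path and prompt are placeholders:)

    # Assumes: pip install llama-cpp-python (built with Metal),
    # and a ~4-bit GGUF of a 30B model on disk (~15-18 GB).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/30b-q4_k_m.gguf",  # hypothetical local path
        n_gpu_layers=-1,  # offload all layers into unified memory
        n_ctx=4096,
    )
    out = llm("Explain unified memory in one paragraph.", max_tokens=128)
    print(out["choices"][0]["text"])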
nunodonato 6 hours ago
My laptop is 4 years old and I only have 6 GB of VRAM. I mostly run 4B and 8B models, and they are extremely useful in a variety of situations. Just because you can't replicate what you do in ChatGPT doesn't mean they don't have their use cases. It seems to me you know very little about what these models can do. Not to mention models trained for specific use cases, or even smaller ones like functiongemma, or TTS/ASR models. (btw, I've trained models using my 6 GB of VRAM too)
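(For anyone curious how a model fits in 6 GB, a minimal sketch using transformers with 4-bit bitsandbytes quantization; the model id is just an example of a small instruct model, swap in whatever fits your card:)

    # Assumes: pip install transformers accelerate bitsandbytes,
    # and an NVIDIA GPU with ~6 GB of VRAM.
    import torch
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              BitsAndBytesConfig)

    model_id = "google/gemma-2-2b-it"  # example small model, not prescriptive
    bnb = BitsAndBytesConfig(load_in_4bit=True,
                             bnb_4bit_compute_dtype=torch.float16)
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb, device_map="auto")

    inputs = tok("Summarize: local models are fine for many tasks.",
                 return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))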