edude03 5 hours ago

So there are two ways to look at this - both hinge on how you define "consumer":

1) We haven't managed to distill models down to sizes that fit in a typical gaming desktop (say, 7B-24B class models) while keeping performance good enough. Even then, most consumers don't have high-end desktops, so even a 3060-class GPU requirement would exclude a lot of people.

2) Nothing is stopping you/anyone from buying 24-ish 5090s (a consumer hardware product) to get the required ~600GB-1TB of VRAM to run unquantized DeepSeek, except time/money/know-how. Sure, it's unreasonably expensive, but it's not like the labs are conspiring to prevent people from running these models; it's just expensive for everyone, and the common person doesn't have the funding to get into it.
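
Back-of-envelope on that option (my assumptions, not gospel: DeepSeek-V3/R1 at ~671B total parameters, 32GB of VRAM per 5090, and ignoring KV cache and runtime overhead entirely):

    # Back-of-envelope VRAM math for the "stack of 5090s" option.
    # Assumptions (mine): DeepSeek-V3/R1 ~= 671B parameters, a 5090 has 32 GB,
    # and KV cache / activation overhead is ignored.
    PARAMS_B = 671        # total parameters, in billions
    GPU_VRAM_GB = 32      # VRAM per RTX 5090

    for precision, bytes_per_param in [("FP8", 1), ("BF16", 2)]:
        weights_gb = PARAMS_B * bytes_per_param   # 1B params * 1 byte ~= 1 GB
        gpus = -(-weights_gb // GPU_VRAM_GB)      # ceiling division
        print(f"{precision}: ~{weights_gb} GB of weights -> at least {gpus} x 5090")

    # FP8:  ~671 GB of weights -> at least 21 x 5090
    # BF16: ~1342 GB of weights -> at least 42 x 5090

So the "24-ish" figure is roughly the FP8 case with some headroom; full BF16 roughly doubles it.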

regularfry 5 hours ago

> 1) We haven't managed to distill models down to sizes that fit in a typical gaming desktop (say, 7B-24B class models) while keeping performance good enough.

That really depends on what "good enough" means. Qwen3-30B runs absolutely fine at q4 on a 24GB card, although that's also stretching "typical gaming desktop". It's competent as a code-completion or aider-type coding-agent model in that scenario.
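
Rough fit check behind that claim (my assumptions: ~30B total parameters, ~4 bits per weight, a few GB for KV cache and runtime buffers):

    # Rough fit check for a ~30B model at 4-bit on a 24 GB card (assumptions mine).
    PARAMS_B = 30                              # total parameters, in billions
    BYTES_PER_PARAM = 0.5                      # ~4-bit quantization
    OVERHEAD_GB = 4                            # guess: KV cache, context, runtime buffers

    weights_gb = PARAMS_B * BYTES_PER_PARAM    # ~15 GB of weights
    total_gb = weights_gb + OVERHEAD_GB        # ~19 GB
    print(f"~{total_gb:.0f} GB needed vs a 24 GB card")   # fits, with some headroom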

But really we need both. Yes, it would be nice to have models targeted at our own particular niche, but there are only so many labs cranking these things out. Small models will only get better from here.