| ▲ | kridsdale3 5 hours ago |
| I can (barely, but sustainably) run Q3.5 397B on my Mac Studio with 256GB of unified memory. It cost $10,000, but that's well within reach for most people here, I expect.
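| Rough math on why it (barely) fits, as a Python sketch; the bits-per-weight and overhead figures are my guesses, not measured:
|
|     # Back-of-envelope: does a 397B-parameter model fit in 256GB of unified memory?
|     # Assumes ~4.5 bits/weight for a Q3/Q4-class quant (quant scales add overhead)
|     # and some headroom for KV cache plus the OS; every number is a rough guess.
|     params = 397e9
|     bits_per_weight = 4.5                  # assumed average for a mixed-precision quant
|     weights_gb = params * bits_per_weight / 8 / 1e9
|     kv_and_os_gb = 30                      # guess: context cache + macOS + apps
|     print(f"weights ~{weights_gb:.0f} GB, total ~{weights_gb + kv_and_os_gb:.0f} GB of 256 GB")
|     # -> weights ~223 GB, total ~253 GB: "barely, but sustainably"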
|
| ▲ | qlm 5 hours ago | parent | next [-] |
| Hacker News moment |
|
| ▲ | toxik 5 hours ago | parent | prev | next [-] |
| $10k is well outside my budget for frivolous computer purchases. |
| |
| ▲ | zozbot234 an hour ago | parent | next [-]
| It would be plenty in-budget if the software side of local AI were a bit more full-featured than it is at present. I want things like SSD offload for cold expert weights and/or for saved/cached KV-context, dynamic context sizing, NPU use for prefill, distributed inference over the network, and so on to just work for most users, without error-prone manual setup. The system should not simply explode when someone tries to run something slightly too large; it should degrade gracefully and let the user figure out where the reasonable limits are.
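| A hypothetical sketch of that graceful-degradation policy in Python; plan_placement and the hot/cold split fractions are made up for illustration, not any real runtime's API:
|
|     # Pick a placement plan that fits instead of crashing. The 25%/75% split is
|     # an illustrative guess at hot shared weights vs. cold expert weights in a MoE.
|     def plan_placement(model_gb, ctx_gb, vram_gb, ram_gb):
|         hot = 0.25 * model_gb              # assumed hot/shared weights, touched every token
|         cold = model_gb - hot              # cold expert weights, touched rarely
|         plans = [
|             ("everything in VRAM", model_gb + ctx_gb <= vram_gb),
|             ("cold experts offloaded to RAM", hot + ctx_gb <= vram_gb and cold <= ram_gb),
|             ("cold experts + KV cache paged to SSD", hot <= vram_gb),
|         ]
|         for name, fits in plans:
|             if fits:
|                 return name
|         return "refuse with a clear message, not a crash"
|
|     print(plan_placement(model_gb=223, ctx_gb=20, vram_gb=72, ram_gb=256))
|     # -> "cold experts + KV cache paged to SSD"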
| ▲ | stefs 2 hours ago | parent | prev | next [-]
| Yeah, but if you really, really wanted to and/or your livelihood depended on it, you probably could afford it.
| ▲ | bdangubic 5 hours ago | parent | prev [-]
| 99.97% of HN users are nodding… :)
| ▲ | hparadiz 3 hours ago | parent [-]
| There are so many good local uses for these models that I fully expect a standard workstation 10 years from now to start at 128GB of RAM and include at least one dedicated inference device.
| ▲ | bdangubic 2 hours ago | parent [-]
| Or, if you believe much of the HN crowd, we're in an AI bubble: in 10 years, after it all crashes, inference will be dirt cheap, data centers will be full of surplus hardware, and it won't make any sense to run monster workstations at home. (I work on a 128GB M4, but I don't run inference; I just have too many Electron apps open at the same time...) :)
| ▲ | hparadiz 2 hours ago | parent [-]
| Inference will be dirt cheap for things like coding, but you'll want much more compute for architectural planning, personal assistants with persistent real-time "thinking"/memory, and real-time multimedia. I could put 10 M4s to work right now and it still wouldn't be enough for what I've been cooking.
|
| ▲ | SlavikCA 5 hours ago | parent | prev | next [-] |
| I'm running it on my Intel Xeon W5 with 256GB of DDR5 and 72GB of Nvidia VRAM. I paid $7-8k for this system; it would probably cost twice as much now. I'm using UD-IQ4_NL quants with thinking disabled and getting 13 t/s.
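| Back-of-envelope on why ~13 t/s is plausible, as a Python sketch; the active-parameter count and bandwidth figures are assumptions, not measurements:
|
|     # Decode is roughly memory-bandwidth-bound: t/s ≈ bandwidth / bytes read per
|     # token, where a MoE only reads its *active* parameters each token.
|     # All numbers below are illustrative guesses.
|     active_params = 30e9                   # guess: params active per token in a large MoE
|     bytes_per_weight = 4.5 / 8             # ~IQ4-class quant
|     bytes_per_token = active_params * bytes_per_weight
|     bandwidth = 250e9                      # guess: blended DDR5 + partial-VRAM bytes/s
|     print(f"~{bandwidth / bytes_per_token:.0f} t/s upper bound")
|     # -> ~15 t/s, the same ballpark as the observed 13 t/s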
|
| ▲ | kylehotchkiss 34 minutes ago | parent | prev | next [-] |
| You have proved my point.
|
| ▲ | rwmj 5 hours ago | parent | prev [-] |
| For some reason you were being downvoted, but I enjoy hearing how people run open-weights models at home (NOT in the cloud) and what kind of hardware they need, even if it's out of my price range.