r_lee 2 hours ago:
Yeah, but I mean more like the old setups where you'd just load a model on a 4090 or something. Even with MoE it's a lot more complex and takes more VRAM, right? It just seems hard to justify for most hobbyists, but maybe I'm slightly out of the loop.
zozbot234 an hour ago:
With sparse MoE it's worth keeping the experts in system RAM, since that lets you transparently use mmap: inactive experts can simply stay on disk. Of course that's also a slowdown unless you have enough RAM for the full set, but it lets you run much larger models on smaller systems.
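A minimal sketch of the idea, assuming a hypothetical flat weights file holding all experts contiguously (the expert count, dimensions, and file layout here are made up for illustration). `numpy.memmap` mmaps the file, so pages are only faulted in from disk when an expert's weights are actually touched:

```python
import os
import tempfile

import numpy as np

# Hypothetical layout: 8 experts, each a 1024x1024 float32 matrix,
# stored back-to-back in one weights file.
NUM_EXPERTS, D = 8, 1024

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "experts.bin")
np.arange(NUM_EXPERTS * D * D, dtype=np.float32).tofile(path)

# mmap the whole file; no expert weights are read from disk yet.
experts = np.memmap(path, dtype=np.float32, mode="r",
                    shape=(NUM_EXPERTS, D, D))

def run_expert(idx, x):
    # Indexing experts[idx] faults in only that expert's pages;
    # the other experts remain on disk (or in the page cache).
    return experts[idx] @ x

x = np.ones(D, dtype=np.float32)
y = run_expert(2, x)
print(y.shape)  # (1024,)
```

With enough free RAM the OS page cache ends up holding the hot experts and the whole thing behaves like an in-memory model; with less RAM, cold experts get evicted and re-read on demand, which is the slowdown mentioned above.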