p1esk 4 days ago:
The 5090 has 32 GB of VRAM. Not sure if that's enough to fit this model.
IceWreck 4 days ago (parent):
llama.cpp supports offloading some of the experts in a MoE model to the CPU. The results are very good: even weaker GPUs can run larger models at reasonable speeds. See the --n-cpu-moe option in https://github.com/ggml-org/llama.cpp/blob/master/tools/serv...
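A minimal sketch of what that invocation can look like. The model filename, layer count, and context size below are placeholders, not values from the thread; the flags are llama-server's GPU-offload and MoE-offload options.

```shell
# Sketch (flags from llama.cpp's llama-server; path and counts are placeholders):
#   -ngl 99         offload all transformer layers to the GPU
#   --n-cpu-moe 30  keep the MoE expert weights of the first 30 layers on the CPU
# Raise --n-cpu-moe until the remaining weights fit in your VRAM.
llama-server -m ./model-Q4_K_M.gguf -ngl 99 --n-cpu-moe 30 -c 8192
```

The attention and shared weights stay on the GPU, so only the (sparsely activated) expert FFNs pay the CPU penalty, which is why throughput stays reasonable.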
svnt 4 days ago (parent):
It should fit enough of the layers to make it reasonably performant.