matja | 2 days ago
I'm running Gemma 4 with the llama.cpp web UI.

https://unsloth.ai/docs/models/gemma-4 > Gemma 4 GGUFs > "Use this model" > llama.cpp

  llama-server -hf unsloth/gemma-4-31B-it-GGUF:Q8_0

If you already have llama.cpp, you might need to update it to support Gemma 4.
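For anyone following along, a rough sketch of the update-then-serve steps (assumes a git checkout of llama.cpp built with CMake; the `--port` and `-ngl` values are just example settings, not from the original comment):

```shell
# Pull the latest llama.cpp and rebuild (new model architectures need a recent build)
git -C llama.cpp pull
cmake -B llama.cpp/build llama.cpp
cmake --build llama.cpp/build --config Release

# Serve the quantized model from the Hugging Face repo named in the comment;
# -hf downloads/caches the GGUF, -ngl offloads layers to the GPU if one is available
./llama.cpp/build/bin/llama-server \
  -hf unsloth/gemma-4-31B-it-GGUF:Q8_0 \
  --port 8080 \
  -ngl 99
```

The web UI is then reachable at http://localhost:8080 in a browser.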