roosgit 4 hours ago
I just hit that error a few minutes ago. I build my llama.cpp from source because I use CUDA on Linux. So I made the mistake of trying to run Gemma4 on an older version I had, and I got the same error. It's possible brew installs an older version which doesn't support Gemma4 yet.
teekert 3 hours ago | parent | next
Ah, it was indeed just that! I'm now on:

    $ llama --version
    version: 8770 (82764d8)
    built with GNU 15.2.0 for Linux x86_64

(from Nix unstable). And this works as advertised: nice chat interface, but no OpenAI API I guess, so no opencode...
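For what it's worth, recent llama.cpp builds ship a `llama-server` binary that exposes an OpenAI-compatible HTTP API, which is usually what tools like opencode expect. A rough sketch (the model path and port here are placeholders, and flag names can vary between versions):

```shell
# Start the server with a local GGUF model (path is a placeholder).
# llama-server serves OpenAI-style endpoints such as /v1/chat/completions.
llama-server -m ./model.gguf --port 8080

# Then any OpenAI-compatible client can point at it:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "hello"}]}'
```

So the chat UI and the API aren't mutually exclusive; the same server process typically provides both.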
zozbot234 4 hours ago | parent | prev
And that's exactly why llama.cpp is not usable by casual users: they follow the "move fast and break things" model. With ollama, you just have to make sure you're getting/building the latest version.