anakaine 5 hours ago

Llama.cpp now ships with a GUI by default. It previously lacked this. Times have changed.

nikodunk 5 hours ago | parent | next [-]

Having read the above article, I just gave llama.cpp a shot. It is as easy now as the author says, though definitely not documented quite as well. My quickstart:

brew install llama.cpp

llama-server -hf ggml-org/gemma-4-E4B-it-GGUF --port 8000

Go to localhost:8000 for the Web UI. On Linux it accelerates correctly on my AMD GPU, which Ollama failed to do, though of course everyone's mileage seems to vary on this.

teekert 4 hours ago | parent [-]

Was hoping it was so easy :) But I probably need to look into it some more.

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma4'
llama_model_load_from_file_impl: failed to load model

Edit: @below, I used `nix-shell -p llama-cpp`, so it's not brew related. Could indeed be an older version! I'll check.

adrian_b an hour ago | parent | next [-]

As has been discussed in a few recent threads on HN, whenever a new model is released, running it successfully may need changes in the inference backends, such as llama.cpp.

There are two main reasons. One is the tokenizer, where new tokenizer definitions may be mishandled by older tokenizer parsers.

The second reason is that each model may implement tool invocations differently, e.g. by using different delimiter tokens and different text layouts for describing the parameters of a tool invocation.
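To make the delimiter point concrete, here is a toy sketch (the template strings below are illustrative examples in the style of real chat templates, not the actual Gemma-4 format):

```python
# Toy sketch: two different chat-template conventions. A backend that only
# knows one family's delimiters will build the wrong prompt for the other.
TEMPLATES = {
    "model_a": "<start_of_turn>user\n{msg}<end_of_turn>\n<start_of_turn>model\n",
    "model_b": "[INST] {msg} [/INST]",
}

def render_prompt(model: str, msg: str) -> str:
    # Wrap the user message in that model's delimiter tokens.
    return TEMPLATES[model].format(msg=msg)

print(render_prompt("model_b", "What is 2+2?"))
```

This is why the backend has to ship template/tokenizer knowledge for each new architecture, and why an old binary chokes on a new model.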

Therefore, running the Gemma-4 models hit various problems during the first days after their release, especially the dense 31B model.

Solving these problems required both a new version of llama.cpp (and of other inference backends) and updates to the models' chat template and tokenizer configuration files.

So anyone who wants to use Gemma-4 should update to the latest version of llama.cpp and to the latest model files from Hugging Face, because the latest updates landed only a couple of days ago.

roosgit 4 hours ago | parent | prev [-]

I just hit that error a few minutes ago. I build my llama.cpp from source because I use CUDA on Linux, so I made the mistake of trying to run Gemma-4 on an older version I had and got the same error. It's possible brew installs an older version which doesn't support Gemma-4 yet.

teekert 3 hours ago | parent | next [-]

Ah it was indeed just that!

I'm now on:

$ llama --version
version: 8770 (82764d8)
built with GNU 15.2.0 for Linux x86_64

(From Nix unstable)

And this works as advertised: nice chat interface, but no OpenAI API I guess, so no opencode...

homarp 3 hours ago | parent [-]

Check on the same port; there is an OpenAI API: https://github.com/ggml-org/llama.cpp/tree/master/tools/serv...
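For example, something like this should work against the same server (a minimal sketch, assuming llama-server's OpenAI-compatible /v1/chat/completions route and the --port 8000 from the quickstart upthread; the "model" field value is an arbitrary label here, since the server serves whatever model it was launched with):

```python
import json
import urllib.request

def build_payload(prompt: str) -> dict:
    # OpenAI-style chat completion request body.
    return {
        "model": "local",  # arbitrary label; llama-server serves its loaded model
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str, base_url: str = "http://localhost:8000") -> str:
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice's message content.
    return body["choices"][0]["message"]["content"]
```

So any tool that speaks the OpenAI API just needs its base URL pointed at the llama-server port.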

teekert 3 hours ago | parent [-]

Good stuff, thanx!

zozbot234 4 hours ago | parent | prev [-]

And that's exactly why llama.cpp is not usable by casual users. They follow the "move fast and break things" model. With ollama, you just have to make sure you're getting/building the latest version.

Eisenstein 2 hours ago | parent [-]

It's not possible to run the latest model architectures without 'moving fast'. The only thing broken here is that they are trying to use an old version with a new model.

cyanydeez 2 hours ago | parent [-]

And Ollama suffered the same fate whenever users wanted to try new models.

OtherShrezzing 5 hours ago | parent | prev | next [-]

While that might be true, for as long as its name is “.cpp”, people are going to think it’s a C++ library and avoid it.

eterm 5 hours ago | parent | next [-]

This is the first I'm learning that it isn't just a C++ library.

In fact the first line of the wikipedia article is:

> llama.cpp is an open source software library

RobotToaster 5 hours ago | parent | prev | next [-]

It would make sense to just make the GUI a separate project, they could call it llama.gui.

homarp 3 hours ago | parent [-]

It is called LlamaBarn: https://github.com/ggml-org/LlamaBarn

adrian_b an hour ago | parent [-]

LlamaBarn is the macOS app, not the HTTP API server, which is "llama-server".

On non-Apple PCs, "llama-server" is what you use, and you can connect to it either with a browser or with an application compatible with the OpenAI API.

Perhaps using "llama-server" as the name of the project would have been less confusing for newbies than "llama.cpp".

I confess that when I first heard about "llama.cpp" I also thought it was just a library and that I would have to write my own program to implement a complete LLM inference backend.

figassis 5 hours ago | parent | prev [-]

This is correct, and I avoided it for this reason: I did not have the bandwidth to go down a C++ rabbit hole, so I just used whatever seemed to abstract it away.

mijoharas 5 hours ago | parent | prev [-]

Frankly, I think the CLI UX and documentation are still much better for Ollama.

It makes a bunch of decisions for you so you don't have to think much to get a model up and running.