mrs6969 7 days ago

Agreed. Ollama itself is kind of a wrapper around llama.cpp anyway. It feels like the real project isn't included in the process.

Now I'm going to go write my own wrapper around llama.cpp, one that is fully open source and truly local.

How can I trust Ollama not to sell my data?
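A truly local wrapper like the one described could start by just shelling out to the llama.cpp CLI binary. This is only a sketch: `build_llama_command` and `run_local` are hypothetical helper names, and the `llama-cli` flags shown here match recent llama.cpp builds but should be checked against your local build's `--help`.

```python
import subprocess

# Hypothetical helper: build the argv for a llama.cpp CLI call.
# Flag names (-m, -p, -n, --no-display-prompt) are assumptions based on
# recent llama.cpp builds; they have changed over time.
def build_llama_command(model_path: str, prompt: str, n_predict: int = 128) -> list[str]:
    return [
        "llama-cli",
        "-m", model_path,          # path to a local GGUF model file
        "-p", prompt,              # prompt text
        "-n", str(n_predict),      # max tokens to generate
        "--no-display-prompt",     # print only the completion
    ]

def run_local(model_path: str, prompt: str) -> str:
    # Everything runs as a local subprocess; nothing leaves the machine.
    cmd = build_llama_command(model_path, prompt)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
```

Because it's just a subprocess call, there's no network stack involved at all, which makes the "truly local" property easy to verify.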

Patrick_Devine 7 days ago | parent | next [-]

Ollama only uses llama.cpp for running legacy models. gpt-oss runs entirely in the Ollama engine.

You don't need to use Turbo mode; it's just there for people who don't have capable enough GPUs.

rafram 7 days ago | parent | prev [-]

Ollama is not a wrapper around llama.cpp anymore, at least for multimodal models (not sure about others). They have their own engine: https://ollama.com/blog/multimodal-models

iphone_elegance 7 days ago | parent [-]

Looks like the backend is ggml; am I missing something? Same difference.