leopoldj 6 days ago

> Ollama has moved off of llama.cpp as a wrapper. We do continue to use the GGML library

Where can I learn more about this? llama.cpp is an inference application built on the ggml library. Does this mean Ollama now has its own code for what llama.cpp does?

guipsp 6 days ago | parent

https://github.com/ollama/ollama/tree/main/model/models
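
In other words, the model definitions that llama.cpp would normally supply now live in Ollama's own Go code, while GGML still provides the low-level tensor kernels underneath. Here is a minimal sketch of that split, with hypothetical names rather than Ollama's actual API (the stubbed ops stand in for GGML's C kernels):

    package main

    import (
        "fmt"
        "math"
    )

    // Tensor stands in for a GGML tensor handle. In the real stack
    // the ops below are backed by GGML's C kernels; they are stubbed
    // here only so the sketch runs.
    type Tensor []float32

    // mul is a stand-in for a library-provided elementwise kernel.
    func mul(a, b Tensor) Tensor {
        out := make(Tensor, len(a))
        for i := range a {
            out[i] = a[i] * b[i]
        }
        return out
    }

    // silu is another library-level op: x * sigmoid(x).
    func silu(x Tensor) Tensor {
        out := make(Tensor, len(x))
        for i, v := range x {
            out[i] = v / float32(1+math.Exp(-float64(v)))
        }
        return out
    }

    // mlp is the kind of code that used to live in llama.cpp: the
    // model architecture itself, expressed by composing tensor ops
    // over loaded weights.
    type mlp struct{ up, down Tensor }

    func (m mlp) forward(x Tensor) Tensor {
        return mul(silu(mul(x, m.up)), m.down)
    }

    func main() {
        m := mlp{up: Tensor{2, 2}, down: Tensor{0.5, 0.5}}
        fmt.Println(m.forward(Tensor{1, -1})) // prints the layer output
    }

The linked directory holds Ollama's per-architecture implementations in this style: the Go code defines each model's forward pass, and GGML supplies the kernels it calls into.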