buyucu · 7 days ago
This kind of gaslighting is exactly why I stopped using Ollama. The GGML library is llama.cpp; they are one and the same. Ollama made sense when llama.cpp was hard to use, but Ollama has no value proposition anymore.
mchiang · 7 days ago
It's a different repo: https://github.com/ggml-org/ggml. The models are implemented by Ollama: https://github.com/ollama/ollama/tree/main/model/models

I can say as a fact that for the gpt-oss model we also implemented our own MXFP4 kernel, and benchmarked it against the reference implementations to make sure Ollama is on par. We implemented harmony and tested it; this should significantly improve tool-calling capability.

I'm not sure if I'm feeding here. We really love what we do, and I hope it shows in our product, in Ollama's design, and in our voice to our community.

You don't have to like Ollama; that's subjective to your taste. As a maintainer, I certainly hope to have you as a user one day. If we don't meet your needs and you want to use an alternative project, that's totally cool too. That's the power of having a choice.
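For context on the MXFP4 format mentioned above: per the OCP Microscaling (MX) spec, each block of 32 FP4 (E2M1) elements shares a single 8-bit E8M0 power-of-two scale. A minimal dequantization sketch, purely illustrative (Ollama's actual kernel is GPU code; all names here are hypothetical, and E8M0 special cases like NaN are omitted):

```python
# Illustrative MXFP4 dequantization, assuming the OCP Microscaling layout:
# 32 FP4 (E2M1) elements per block, one shared E8M0 scale byte.

FP4_E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # positive E2M1 code points

def decode_fp4(nibble: int) -> float:
    """Decode one 4-bit E2M1 value: 1 sign bit, 2 exponent bits, 1 mantissa bit."""
    sign = -1.0 if nibble & 0x8 else 1.0
    return sign * FP4_E2M1[nibble & 0x7]

def decode_e8m0(byte: int) -> float:
    """Decode the shared block scale: an unsigned exponent, value 2**(byte - 127)."""
    return 2.0 ** (byte - 127)

def dequant_block(scale_byte: int, packed: bytes) -> list[float]:
    """Dequantize one 32-element MXFP4 block (16 bytes, two nibbles per byte)."""
    scale = decode_e8m0(scale_byte)
    out = []
    for b in packed:
        out.append(decode_fp4(b & 0xF) * scale)   # low nibble first
        out.append(decode_fp4(b >> 4) * scale)    # then high nibble
    return out

# Example: scale byte 127 -> 2**0 = 1.0; every nibble 0x2 decodes to 1.0
vals = dequant_block(127, bytes([0x22] * 16))
```

The appeal of the format is that the per-block scale restores dynamic range that 4-bit elements alone lack, which is why a careful kernel implementation can stay on par with reference accuracy.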
| ||||||||||||||
scosman · 6 days ago
> GGML library is llama.cpp. They are one and the same.

Nope…