buyucu 7 days ago

This kind of gaslighting is exactly why I stopped using Ollama.

GGML library is llama.cpp. They are one and the same.

Ollama made sense when llama.cpp was hard to use. Ollama does not have a value proposition anymore.

mchiang 7 days ago | parent | next [-]

It’s a different repo. https://github.com/ggml-org/ggml

The models are implemented by Ollama https://github.com/ollama/ollama/tree/main/model/models

I can say as a fact that, for the gpt-oss model, we also implemented our own MXFP4 kernel and benchmarked it against the reference implementations to make sure Ollama is on par. We implemented harmony and tested it. This should significantly improve tool-calling capability.
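For context on what an MXFP4 kernel has to handle: in the OCP Microscaling (MX) format, weights are stored as blocks of 32 FP4 (E2M1) values that share one 8-bit power-of-two (E8M0) scale. The sketch below is not Ollama's kernel, just a minimal Python illustration of dequantizing one such block; the low-nibble-first packing order is an assumption.

```python
# Illustrative MXFP4 dequantization, per the OCP Microscaling spec:
# 32 FP4 (E2M1) elements per block, one shared E8M0 scale byte.
# NOT Ollama's implementation -- a format sketch only.

# The 8 positive E2M1 magnitudes (codes 0..7); bit 3 is the sign.
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def decode_e2m1(code: int) -> float:
    """Decode a 4-bit E2M1 value (1 sign, 2 exponent, 1 mantissa bits)."""
    sign = -1.0 if code & 0b1000 else 1.0
    return sign * E2M1_VALUES[code & 0b0111]

def decode_e8m0(scale_byte: int) -> float:
    """Decode an 8-bit E8M0 scale: a pure power of two with bias 127."""
    if scale_byte == 0xFF:          # all-ones encodes NaN
        return float("nan")
    return 2.0 ** (scale_byte - 127)

def dequantize_block(scale_byte: int, packed: bytes) -> list:
    """Dequantize one 32-element MXFP4 block (16 packed bytes + 1 scale byte)."""
    scale = decode_e8m0(scale_byte)
    out = []
    for byte in packed:             # two 4-bit codes per byte (low nibble first, assumed)
        out.append(decode_e2m1(byte & 0x0F) * scale)
        out.append(decode_e2m1(byte >> 4) * scale)
    return out
```

A real kernel does the same arithmetic fused into the matmul on the GPU rather than materializing the floats, which is why a from-scratch implementation is worth benchmarking against the reference.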

I'm not sure if I'm just feeding the troll here. We really love what we do, and I hope it shows in our product, in Ollama's design, and in our voice to our community.

You don’t have to like Ollama. That’s subjective to your taste. As a maintainer, I certainly hope to have you as a user one day. If we don’t meet your needs and you want to use an alternative project, that’s totally cool too. It’s the power of having a choice.

mark_l_watson 6 days ago | parent | next [-]

Hello, thanks for answering questions here.

Is there a schedule for adding more models to the Turbo plan, beyond gpt-oss 20b/120b? I wanted to try your $20/month Turbo plan, but I would like to be able to experiment with a few other large models.

buyucu 5 days ago | parent | prev [-]

This is exactly what I mean by gaslighting.

GGML is llama.cpp. It is developed by the same people as llama.cpp and powers everything llama.cpp does. You must know that. Ignoring that fact is very dishonest.

scosman 6 days ago | parent | prev [-]

> GGML library is llama.cpp. They are one and the same.

Nope…