mchiang 7 days ago
thanks, I'll take that feedback, but I do want to clarify that it's not from llama.cpp/ggml. It's from ggml-org/ggml. I suppose it's all interchangeable though, so thank you for it.
kristjansson 6 days ago | parent
i.e. as of the time of writing, about +/- 1445 lines of difference between the two, out of roughly 175k total lines, a lot of which is the recent MXFP4 stuff (a rough sketch of how to reproduce that count is at the end of this comment).

Ollama is great software. It's integral to the broader diffusion of LLMs. You guys should be incredibly proud of it and the impact it's had.

I understand the current environment rewards bold claims, but the sense I get from some of your communications is "what's the boldest, strongest claim we can make that's still mostly technically true?" As a potential user, taking those claims at face value until closer evaluation reveals the discrepancy feels pretty bad, and it keeps me firmly in the 'potential' camp. Have the confidence in your software, and the respect for your users, to advertise your system as it is.
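For what it's worth, a figure like that can be reproduced by just diffing the two source trees, roughly as in the sketch below. The path of the vendored ggml copy inside the ollama checkout is my guess, not something stated in this thread, so verify it against the actual repo layout before trusting the number.

```python
"""Rough sketch of comparing ollama's vendored ggml against upstream ggml.

Assumes both repositories are already cloned side by side; the vendored
path inside the ollama tree is an assumption -- check the real layout.
"""
import subprocess

UPSTREAM_SRC = "ggml/src"                         # clone of github.com/ggml-org/ggml
VENDORED_SRC = "ollama/ml/backend/ggml/ggml/src"  # assumed location of the vendored copy

# `git diff --no-index` compares two directory trees that aren't part of the
# same repository; `--shortstat` summarizes files changed and lines added/removed.
result = subprocess.run(
    ["git", "diff", "--no-index", "--shortstat", UPSTREAM_SRC, VENDORED_SRC],
    capture_output=True,
    text=True,  # exit status 1 here only means "differences found", so don't raise on it
)
print(result.stdout.strip() or "no differences found")
```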