brabel · 3 hours ago
Ollama is a bit easier to use, you're right. But the point of the article is that they disregarded the license of llama.cpp, moved away from open source while still claiming to be open source, and pivoted to cloud offerings when the whole point was to run local models, all without contributing anything back to the big open source projects it owes its existence to. Maybe you don't care about performance (weird, given performance is the main blocker for local LLMs), but you should care about the ethics of the companies making the products you use. And anyway, this thread has lots of alternatives that are even easier to use and don't shit on the open source community making things happen.
endymion-light · an hour ago
I'm making more of a pragmatic point. While the ethics of companies are important, I'm still using OpenAI, Anthropic, Microsoft, Apple, etc., so I clearly accept a trade-off between morality and ease of use. So far I've found Ollama to have the most intuitive experience for trying new models. Once I've tried those models and decided on something to use for a project, I can deploy them and never need to touch a UI again. I'll be trying out the other options in this thread, but my point is that ease of use is going to triumph over the other points the original post made, and some of the alternatives mentioned there miss why Ollama is so popular.