endymion-light 4 hours ago
I'm sorry, on a Mac, Ollama just works. It lets me pull a model and test it quickly. This is like saying stop using Google Drive, upload everything to S3 instead! When I'm using Ollama I honestly don't care about performance; I'm looking to try out a model and then, if it seems good, move it onto a more dedicated stack built specifically for it.
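
(For context on the "test it quickly" workflow: a minimal sketch of a one-off smoke test against Ollama's local HTTP API, assuming the server is running on its default port and a model has already been pulled with `ollama pull`; the model name here is just an example.)

    import json
    import urllib.request

    # Quick local smoke test against Ollama's default endpoint (port 11434).
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3.2",  # example model; swap in whatever you pulled
            "prompt": "Explain RAII in one sentence.",
            "stream": False,      # return the full reply as one JSON object
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])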
brabel 3 hours ago
Ollama is a bit easier to use, you're right. But the point of the article is the way they disregarded the license of llama.cpp, moved away from open source while still claiming to be open source, and pivoted to cloud offerings when the whole point was to run local models, all without contributing anything back to the big open source projects it owes its existence to. Maybe you don't care about performance (odd, given that performance is the main blocker for local LLMs), but you should care about the ethics of the companies making the products you use. And anyway, this thread has lots of alternatives that are even easier to use and don't shit on the open source community making things happen.