cristoperb 6 hours ago
Ollama is a Y Combinator startup, so I guess they have to find some ROI at some point.[1] I personally found Ollama an easy way to try out local LLMs and appreciate it for that (I still use it to download small models onto my laptop and phone via Termux), but I've long since switched to llama.cpp + llama-swap[2] on my dev desktop. I download whatever GGUFs I want from Hugging Face, and whenever I want to update I just do `git pull` and `cmake --build build --config Release` from my llama.cpp directory.

1: https://www.ycombinator.com/companies/ollama

2: https://github.com/mostlygeek/llama-swap
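Roughly, the whole update-and-serve loop looks something like the sketch below; the paths, port, and model file are placeholders, and you may want extra cmake flags (e.g. -DGGML_CUDA=ON for an NVIDIA GPU) depending on your hardware:

    # update and rebuild llama.cpp (flags are illustrative)
    cd ~/llama.cpp
    git pull
    cmake -B build
    cmake --build build --config Release

    # serve a downloaded GGUF directly (model path is a placeholder)
    ./build/bin/llama-server -m ~/models/some-model.gguf --port 8080

llama-swap then sits in front of llama-server and swaps models in and out on demand, so you don't have to restart the server by hand when you switch GGUFs.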