▲ otabdeveloper4 7 days ago
[flagged]
▲ dang 7 days ago
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
▲ api 7 days ago
> Repackaging existing software while literally adding no useful functionality was always their gig.

Developers continue to be blind to usability and UI/UX. Ollama lets you just install it, install models, and go. The only other thing really like that is LM Studio. It's not surprising that the people behind it are Docker people. Yes, you can do everything Docker does with the Linux kernel and shell commands, but do you want to? Making software usable is often orders of magnitude more work than making it work.
▲ llmtosser 7 days ago
This is not true. No inference engine does all of:

- Model switching
- Unload after idle
- Dynamic layer offload to CPU to avoid OOM
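The "unload after idle" behavior in that feature list is essentially a keep-alive timer per loaded model. A minimal sketch of that pattern (the class name, registry shape, and `keep_alive_s` parameter are illustrative, not Ollama's actual internals):

```python
import time

class IdleUnloader:
    """Sketch of unload-after-idle: evict models idle longer than keep_alive_s."""

    def __init__(self, keep_alive_s: float):
        self.keep_alive_s = keep_alive_s
        self.loaded: dict[str, float] = {}  # model name -> last-used timestamp

    def touch(self, model: str) -> None:
        # Called on every inference request; (re)loads the model and resets its timer.
        self.loaded[model] = time.monotonic()

    def sweep(self) -> list[str]:
        # Run periodically: unload every model whose idle time exceeds the deadline.
        now = time.monotonic()
        expired = [m for m, t in self.loaded.items() if now - t > self.keep_alive_s]
        for m in expired:
            del self.loaded[m]
        return expired
```

In a real server the sweep would run on a background thread and actually free GPU memory; the point is only that the policy itself is a small amount of bookkeeping layered on top of the inference engine.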
▲ mchiang 7 days ago
Sorry that you feel the way you feel. :( I'm not sure which package we use that is triggering this. My guess is llama.cpp, based on what I see on social? Ollama has long since shifted to using our own engine. We do use llama.cpp for legacy and backwards compatibility. I want to be clear it's not a knock on the llama.cpp project either. There are certain features we want to build into Ollama, and we want to be opinionated about the experience we build.

Have you supported our past gigs before? Why not be more happy and optimistic in seeing everyone build their dreams (success or not)? If you go build a project of your dreams, I'd be supportive of it too.
▲ dangoodmanUT 7 days ago
Yes, everyone should just write C++ to call local LLMs, obviously.