api 7 days ago

> Repackaging existing software while literally adding no useful functionality was always their gig.

Developers continue to be blind to usability and UI/UX. Ollama lets you just install it, pull models, and go. The only other thing really like that is LM Studio.
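
For reference, the entire flow is roughly this (the install script is the documented Linux/macOS route; the model tag is just an example):

    # install
    curl -fsSL https://ollama.com/install.sh | sh
    # pull a model and start chatting in one step
    ollama run llama3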

It's not surprising that the people behind it are Docker people. Yes, you can do everything Docker does with raw Linux kernel features and shell commands, but do you want to?

Making software usable is often many orders of magnitude more work than making software work.

otabdeveloper4 7 days ago

> Ollama lets you just install it, just install models, and go.

So does the original llama.cpp. And you won't have to deal with mislabeled models and insane defaults out of the box.
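
The equivalent is about as short, assuming you've already downloaded a GGUF file (the path is a placeholder):

    # one-shot completion against a local GGUF model
    llama-cli -m ./models/some-model.gguf -p "Hello"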

lxgr 6 days ago

Can it easily run as a server process in the background? To me, not having to load the LLM into memory for every single interaction is a big win of Ollama.

otabdeveloper4 6 days ago

Yes, of course it can.

lxgr 6 days ago

I wouldn't consider that a given at all, but apparently there's indeed `llama-server`, which looks promising!
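
A minimal sketch of what that looks like (paths are placeholders; 8080 is llama-server's default port):

    # run the model as a persistent background server
    llama-server -m ./models/some-model.gguf --port 8080

    # clients talk to its OpenAI-compatible endpoint
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages":[{"role":"user","content":"Hello"}]}'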

Then the only thing missing seems to be a canonical way for clients to instantiate that, ideally in some OS-native way (systemd, launchd, etc.), and a canonical port to connect to.
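
For illustration, a systemd user unit could look something like this (the unit name, binary location, and model path are all my own guesses, which is rather the point; today everyone has to roll their own):

    # ~/.config/systemd/user/llama-server.service (hypothetical)
    [Unit]
    Description=llama.cpp inference server

    [Service]
    ExecStart=/usr/local/bin/llama-server -m %h/models/some-model.gguf --port 8080
    Restart=on-failure

    [Install]
    WantedBy=default.target

    # then start it once and on every login
    systemctl --user enable --now llama-server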