redrove 3 hours ago

There is virtually no reason to use Ollama over LM Studio or the myriad of other alternatives.

Ollama is slower, and they started out as a shameless llama.cpp rip-off without giving credit. Now they've "ported" it to Go, which means they're just vibe-code-translating llama.cpp, bugs included.

faitswulff 2 hours ago | parent | next [-]

Does LM Studio have an equivalent to the ollama launch command? i.e. `ollama launch claude --model qwen3.5:35b-a3b-coding-nvfp4`

DiabloD3 40 minutes ago | parent [-]

I don't think it does, but llama.cpp does, and it can load models off HuggingFace directly (so it's not limited to ollama's unofficial model mirror the way ollama is).
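For what it's worth, a minimal sketch of the HuggingFace route with llama.cpp's `-hf` flag (assuming `llama-server` is installed; the repo name is just an illustrative example):

```shell
# download a GGUF model straight from Hugging Face and serve it locally;
# the model is cached on disk after the first run
llama-server -hf ggml-org/gemma-3-1b-it-GGUF
```

The same flag works with `llama-cli` for an interactive session instead of a server.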

There is no reason to ever use ollama.

ffsm8 26 minutes ago | parent | next [-]

> I don't think it does, but llama.cpp does

I just checked their docs and can't see anything like it.

Did you mistake the command to just download and load the model?

beanjuiceII 17 minutes ago | parent | prev [-]

Sure there's a reason... it works fine. That's the reason.

alifeinbinary 3 hours ago | parent | prev | next [-]

I really like LM Studio when I can use it under Windows, but for people like me with an Intel Mac + AMD GPU, ollama is the only option, because it can (unofficially) leverage the GPU using MoltenVK, aka Vulkan. We're still testing it and hoping to get Vulkan support into the main branch soon. It works perfectly for single GPUs, but some edge cases with multiple GPUs are unsupported until upstream support from MoltenVK comes through. But yeah, I agree, it wasn't cool to repackage Georgi's work like that.

meltyness 2 hours ago | parent | prev | next [-]

I feel like the READMEs for these three large, popular packages already illustrate the tradeoffs better than a Hacker News argument.

gen6acd60af 2 hours ago | parent | prev | next [-]

LM Studio is closed source.

And didn't Ollama independently ship a vision pipeline for some multimodal models months before llama.cpp supported it?

iLoveOncall 3 hours ago | parent | prev | next [-]

> There is virtually no reason to use Ollama over LM Studio or the myriad of other alternatives.

Hmm, the fact that Ollama is open-source, can run in Docker, etc.?
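A minimal sketch of the Docker route, using the official `ollama/ollama` image (the port and volume name follow the image's own documentation):

```shell
# run Ollama in a container, persisting downloaded models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# pull and chat with a model inside the running container
docker exec -it ollama ollama run llama3.2
```

The API then listens on `localhost:11434`, same as a native install.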

DiabloD3 27 minutes ago | parent [-]

Ollama is quasi-open source.

In some places in the source code they claim sole ownership of the code, even though it is highly derivative of llama.cpp (Ollama started its life as a llama.cpp frontend). They do keep the same license, however: MIT.

There is no reason to use Ollama as an alternative to llama.cpp, just use the real thing instead.

lousken 2 hours ago | parent | prev [-]

lm studio is not open source, and you can't use it on a server and connect clients to it?

jedisct1 2 hours ago | parent [-]

LM Studio can absolutely run as a server.

walthamstow an hour ago | parent [-]

IIRC it does so by default, too. I have loads of stuff pointing at LM Studio on localhost.
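A quick sketch of talking to that local server (LM Studio's default port is 1234, and it exposes an OpenAI-compatible API; the model name below is hypothetical and must match whatever you have loaded):

```shell
# list the models the local LM Studio server currently has available
curl http://localhost:1234/v1/models

# send a chat request to a loaded model (model name is an example)
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5-coder-7b-instruct",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

Because the API is OpenAI-compatible, any client that accepts a custom base URL can point at it.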