simonw 5 hours ago

This story talks about MLX and Ollama but doesn't mention LM Studio - https://lmstudio.ai/

LM Studio can run both MLX and GGUF models but does so from an Ollama style (but more full-featured) macOS GUI. They also have a very actively maintained model catalog at https://lmstudio.ai/models

ZeroCool2u 5 hours ago | parent | next [-]

LM Studio is so much better than Ollama that it's silly it's not more popular.

thehamkercat 5 hours ago | parent [-]

LMStudio is not open source though, ollama is

but people should use llama.cpp instead

smcleod 4 hours ago | parent | next [-]

I suspect Ollama is at least partly moving away from open source as they look to raise capital; when they released their replacement desktop app, they did so as closed source. You're absolutely right that people should be using llama.cpp: not only is it truly open source, it's significantly faster, has better model support and many more features, is better maintained, and its development community is far more active.
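For anyone who hasn't tried it, this is roughly what using llama.cpp directly looks like via the llama-cpp-python bindings. Just a sketch: the GGUF path is a placeholder and the sampling parameters are arbitrary.

```python
# Minimal sketch of running a local GGUF model with the llama-cpp-python
# bindings (pip install llama-cpp-python). The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/your-model.Q4_K_M.gguf", n_ctx=4096)

# Plain completion call; the stop sequence and token limit are illustrative.
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```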

nateb2022 3 hours ago | parent | prev | next [-]

> but people should use llama.cpp instead

MLX is a lot more performant than Ollama and llama.cpp on Apple Silicon, on both peak memory usage and tok/s output.

edit: LM Studio benefits from MLX optimizations when running MLX-compatible models.
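For context, running an MLX model outside LM Studio is also only a few lines with the mlx-lm package. A sketch for Apple Silicon; the mlx-community repo name is just an example:

```python
# Sketch using the mlx-lm package on Apple Silicon (pip install mlx-lm).
# The model name below is an example from the mlx-community hub.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
text = generate(model, tokenizer, prompt="Explain MLX in one sentence.", max_tokens=64)
print(text)
```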

behnamoh 4 hours ago | parent | prev [-]

> LMStudio is not open source though, ollama is

and why should that affect usage? it's not like ollama users fork the repo before installing it.

thehamkercat 4 hours ago | parent [-]

It was worth mentioning.

midius 5 hours ago | parent | prev | next [-]

Makes me think it's a sponsored post.

Cadwhisker 5 hours ago | parent [-]

LMStudio? No, it's the easiest way to run an LLM locally that I've seen, to the point where I've stopped looking at other alternatives.

It's cross-platform (Win/Mac/Linux), detects the most appropriate GPU in your system, and tells you whether the model you want to download will run within its RAM footprint.

It lets you set up a local server that you can access through API calls as if you were remotely connected to an online service.
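To illustrate, the local server speaks an OpenAI-compatible API, so existing client code can just point at it. A sketch assuming the default port 1234; check the app's server settings:

```python
# Sketch: calling LM Studio's local server with the openai client.
# The base URL assumes the default port; the API key is ignored locally.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # placeholder; whatever model is loaded in the app is used
    messages=[{"role": "user", "content": "Hello from a local client"}],
)
print(resp.choices[0].message.content)
```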

vunderba 4 hours ago | parent [-]

FWIW, Ollama already does most of this:

- Cross-platform

- Sets up a local API server

The tradeoff is a somewhat higher learning curve, since you need to manually browse the model library and choose the model/quantization that best fits your workflow and hardware. OTOH, it's also open source, unlike LMStudio, which is proprietary.
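To make the comparison concrete, here's roughly what hitting Ollama's local API looks like (a sketch assuming the default port 11434 and an already-pulled model; the model name is an example):

```python
# Sketch: calling Ollama's local REST API with requests.
# Assumes the default port and that "llama3.2" has already been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Hello from a local client", "stream": False},
)
print(resp.json()["response"])
```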

randallsquared 4 hours ago | parent [-]

I assumed from the name that it only ran llama-derived models, rather than whatever is available at huggingface. Is that not the case?

fenykep 4 hours ago | parent [-]

No, they have quite a broad list of models: https://ollama.com/search

[edit] Oh, and apparently you can also run some models directly from HuggingFace: https://huggingface.co/docs/hub/ollama
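Per those docs, an hf.co/<user>/<repo> reference works like any other model name once pulled. A sketch with the official ollama Python package; the repo name below is purely illustrative:

```python
# Sketch using the ollama Python package (pip install ollama).
# Assumes the GGUF repo was pulled first, e.g. `ollama pull hf.co/<user>/<repo>`;
# the repo name below is only an example.
import ollama

resp = ollama.chat(
    model="hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp["message"]["content"])
```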

evacchi 3 hours ago | parent | prev | next [-]

ramalama.ai is worth mentioning too

thehamkercat 4 hours ago | parent | prev [-]

I think you should mention that LM Studio isn't open source.

I mean, what's the point of using local models if you can't trust the app itself?

behnamoh 4 hours ago | parent | next [-]

> I mean, what's the point of using local models if you can't trust the app itself?

and you think ollama doesn't do telemetry/etc. just because it's open source?

thehamkercat 4 hours ago | parent [-]

That's why I suggested using llama.cpp in my other comment.

satvikpendem 4 hours ago | parent | prev [-]

Depends what people use them for; not every user of local models is doing so for privacy, some just don't like paying for online models.

thehamkercat 4 hours ago | parent [-]

Most LLM sites are now offering free plans, and they are usually better than what you can run locally, so I think people are running local models for privacy 99% of the time.