xiconfjs 11 hours ago

Ollama on macOS is a one-click solution with stable one-click updates. Happy so far. But MLX support was the only missing piece for me.

yard2010 9 hours ago | parent [-]

Can you please write about your hardware?

xiconfjs 2 hours ago | parent [-]

* macOS 26.x on a MacBook Pro M1 Max, 32 GB

* Ollama on macOS, Cursor to play around

* Open WebUI [1] on my home server via API to Ollama (also for remote "A.I." access)

* running gpt-oss:20b and qwen3.5:9b with ease, qwen3.5:27b for more complex tasks

[1] https://github.com/open-webui/open-webui
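For context on the Open WebUI → Ollama hop described above: Open WebUI talks to Ollama's REST API, which by default listens on port 11434 and accepts a JSON body like the one below at /api/generate. A minimal sketch, assuming a local Ollama install and using the gpt-oss:20b model named in the comment (the prompt is a placeholder):

```python
import json

# Request body for Ollama's /api/generate endpoint.
# "stream": False asks for a single JSON response instead of a token stream.
payload = {
    "model": "gpt-oss:20b",
    "prompt": "Why is the sky blue?",  # placeholder prompt
    "stream": False,
}
body = json.dumps(payload)
print(body)

# To actually send it (requires a running Ollama instance):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Open WebUI only needs the base URL (http://homeserver:11434 or similar) configured as an Ollama connection; it issues requests of this shape on your behalf.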

brcmthrowaway an hour ago | parent [-]

Seems complicated. Switch to LM Studio

xiconfjs 23 minutes ago | parent [-]

I tried many times, but at least with its API server active, LM Studio has some kind of memory leak that slows the whole system down (after ~1-2 days of uptime), even after unloading the model and quitting LM Studio, to the point where even playing a 1080p video drops frames. No such issues with Ollama.