OMLX – LLM inference, optimized for your Mac(omlx.ai)
2 points by wrxd 2 days ago | 1 comment
threecheese 2 days ago | parent [-]

It would be helpful to benchmark against other providers that sit atop MLX; this page tells me how OMLX performs, but not why I should move from another option (like LM Studio). I get that you have some features you might otherwise only find in vLLM, but how do I know whether Ollama would be X tps slower? TBH, without competitors in it, this reads less like a benchmark and more like a data sheet.
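FWIW, a head-to-head tokens-per-second comparison is easy to run yourself, since Ollama, LM Studio, and most MLX-based servers expose an OpenAI-compatible endpoint. A minimal sketch (the localhost ports and model names below are assumptions for a typical local setup, not anything from OMLX's page):

```python
import json
import time
import urllib.request

def tokens_per_second(num_tokens: int, elapsed_s: float) -> float:
    """Decode throughput in tokens/sec; 0.0 for non-positive elapsed time."""
    return num_tokens / elapsed_s if elapsed_s > 0 else 0.0

def bench_completion(base_url: str, model: str, prompt: str,
                     max_tokens: int = 128) -> float:
    """Time one chat completion against an OpenAI-compatible server and
    compute throughput from the completion-token count it reports."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }).encode()
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    elapsed = time.perf_counter() - start
    return tokens_per_second(data["usage"]["completion_tokens"], elapsed)

if __name__ == "__main__":
    # Assumed defaults: Ollama serves an OpenAI-compatible API on :11434/v1,
    # LM Studio on :1234/v1. Model names are placeholders for whatever you
    # have pulled locally; use the same model on every server being compared.
    targets = [
        ("ollama", "http://localhost:11434/v1", "llama3.1:8b"),
        ("lmstudio", "http://localhost:1234/v1", "llama-3.1-8b"),
    ]
    for name, url, model in targets:
        try:
            print(f"{name}: {bench_completion(url, model, 'Say hi.'):.1f} tok/s")
        except OSError as exc:
            print(f"{name}: unreachable ({exc})")
```

Note this lumps prompt processing in with decode time; for a fairer number you'd stream and time only the inter-token interval, which is roughly what published tps figures report.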

https://omlx.ai/benchmarks