| ▲ | diggan 9 days ago |
| > LMStudio/llama.cpp |
| Even though LM Studio uses llama.cpp as a runtime, the performance differs between them. With LM Studio 0.3.22 Build 2 using the CUDA llama.cpp (Linux) v1.45.0 runtime I get ~86 tok/s on an RTX Pro 6000, while with llama.cpp compiled from 1d72c841888 (Aug 7 10:53:21 2025) I get ~180 tok/s, almost 100 tokens per second more, both running lmstudio-community/gpt-oss-120b-GGUF. |
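(For anyone who wants a rough tok/s number of their own outside LM Studio, a minimal sketch using the llama-cpp-python bindings is below; the model path, prompt, and settings are placeholders, and the figure won't line up exactly with llama-bench or LM Studio's built-in counters since it also counts prompt-processing time.)

    import time
    from llama_cpp import Llama  # pip install llama-cpp-python (build with CUDA support for GPU offload)

    # Placeholder path: point this at your local GGUF file.
    llm = Llama(model_path="gpt-oss-120b.gguf", n_gpu_layers=-1, n_ctx=4096, verbose=False)

    start = time.perf_counter()
    out = llm("Explain KV caching in two sentences.", max_tokens=256, temperature=0.0)
    elapsed = time.perf_counter() - start

    generated = out["usage"]["completion_tokens"]
    # Rough throughput; includes prompt processing, so it will read a bit low
    # compared to pure decode speed reported by llama-bench.
    print(f"{generated} tokens in {elapsed:.2f}s -> {generated / elapsed:.1f} tok/s")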
|
| ▲ | esafak 8 days ago | parent [-] |
| Is it always like this or does it depend on the model? |
| |
| ▲ | diggan 8 days ago | parent [-] |
| Depends on the model. Each runner needs to implement support for new architectures as they appear, and they all seem to focus on different things. As far as I've gathered so far, vLLM focuses on inference speed, SGLang on parallelizing across multiple GPUs, and Ollama on getting its implementation out the door as fast as possible, sometimes cutting corners; llama.cpp sits somewhere in between Ollama and vLLM. LM Studio, in turn, seems to lag slightly behind in the llama.cpp version it ships, so I'm guessing that's the difference between LM Studio and llama.cpp built from source today. |
|