| ▲ | nijave 11 hours ago |
| Has anyone compared this to Ollama? I had good success with the latest Ollama with ROCm 7.4 on a 9070 XT a few days ago |
|
| ▲ | RealFloridaMan 9 hours ago | parent | next [-] |
| It's optimized for compatibility across different APIs and has specific hardware builds for AMD GPUs and NPUs. It's run by AMD. Under the hood they are both running llama.cpp, but this one ships specific builds for different GPUs. Not sure if the 9070 is one; I'm running it on a 370 and a 395 APU. |
|
| ▲ | martin-adams 8 hours ago | parent | prev | next [-] |
| I just compared this on my MacBook M1 Max (64GB RAM) with the following:
Model: qwen3.59b
Prompt: "Hey, tell me a story about going to space"
Ollama completed in about 1:44
Lemonade completed in about 1:14
So it seems faster in this very limited test. |
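If anyone wants to reproduce this, both servers speak an OpenAI-compatible chat completions API, so one script can time them. A minimal sketch with the stdlib only; the Lemonade base URL, port, and model name below are assumptions you'll need to adjust to your own setup (Ollama's OpenAI-compatible endpoint does default to localhost:11434/v1):

```python
import json
import time
import urllib.request


def time_completion(base_url: str, model: str, prompt: str) -> float:
    """Send one chat completion to an OpenAI-compatible server; return elapsed seconds."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        resp.read()  # wait for the full (non-streamed) response
    return time.perf_counter() - start


def speedup(baseline_s: float, candidate_s: float) -> float:
    """How many times faster the candidate finished than the baseline."""
    return baseline_s / candidate_s


if __name__ == "__main__":
    prompt = "Hey, tell me a story about going to space"
    # Hypothetical URLs/model name -- point these at your actual servers.
    ollama_s = time_completion("http://localhost:11434/v1", "your-model", prompt)
    lemonade_s = time_completion("http://localhost:8000/api/v1", "your-model", prompt)
    print(f"Ollama {ollama_s:.1f}s, Lemonade {lemonade_s:.1f}s, "
          f"{speedup(ollama_s, lemonade_s):.2f}x")
```

Plugging in the times above (1:44 vs 1:14, i.e. 104s vs 74s) gives roughly a 1.4x speedup, though a single prompt is far too small a sample to generalize from.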
|
| ▲ | nezhar 6 hours ago | parent | prev | next [-] |
| I'm also curious about this one; I'd also like to compare it to vLLM. |
|
| ▲ | iugtmkbdfil834 11 hours ago | parent | prev | next [-] |
| Seconded. Currently on ollama for local inference, but I am curious how it compares. |
| |
| ▲ | LumielGR 9 hours ago | parent [-] | | Lemonade uses llama.cpp for text and vision with a nightly ROCm build. It can also load and serve multiple LLMs at the same time. Beyond that, it can generate images, use whisper.cpp, use TTS models, use the NPU (e.g. Strix Halo amdxdna2), and more! |
|
|
| ▲ | metalliqaz 9 hours ago | parent | prev [-] |
| better than Vulkan? |
| |
| ▲ | cpburns2009 9 hours ago | parent | next [-] | | In my experience using llama.cpp (which ollama uses internally) on a Strix Halo, whether ROCm or Vulkan performs better really depends on the model, and it's usually within 10%. I do have access to an RX 7900 XT I should compare against, though. | | |
| ▲ | metalliqaz 8 hours ago | parent [-] | | Perhaps I should just google it, but I'm under the impression that ollama uses llama.cpp internally, not the other way around. Thanks for that data point; I should experiment with ROCm. | | |
| ▲ | cpburns2009 7 hours ago | parent | next [-] | | I meant ollama uses llama.cpp internally. Sorry for the confusion. | |
| ▲ | naasking 6 hours ago | parent | prev [-] | | From what I understand, ROCm is a lot buggier and has some performance regressions on a lot of GPUs in the 7.x series. Vulkan performance for LLMs is apparently not far behind ROCm and is far more stable and predictable at this time. |
|
| |
| ▲ | 0x457 6 hours ago | parent | prev | next [-] | | For me Vulkan performs better on integrated cards, but ROCm (MIGraphX) on 7900 XTX. | |
| ▲ | hrmtst93837 7 hours ago | parent | prev [-] | | Wrong layer. Vulkan is a graphics and compute API, while Lemonade is an LLM server, so comparing them makes about as much sense as comparing sockets to nginx. If your goal is to run local models without writing half the stack yourself, compare Lemonade to Ollama or vLLM. | | |
| ▲ | metalliqaz 7 hours ago | parent [-] | | I was talking about ROCm vs Vulkan. On AMD GPUs, Vulkan has been commonly recognized as the faster API for some time. Both have been slower than CUDA because most of the hosting projects focus entirely on Nvidia. The parent post seemed to indicate that newer ROCm releases are better. | | |
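For anyone who wants to settle the ROCm-vs-Vulkan question on their own card, llama.cpp can be built against either backend and compared with its bundled llama-bench tool. A rough sketch, assuming a current llama.cpp checkout and the relevant SDKs installed; the gfx target and model path are placeholders to adjust for your GPU:

```shell
# Vulkan backend
cmake -B build-vulkan -DGGML_VULKAN=ON
cmake --build build-vulkan --config Release -j

# ROCm/HIP backend (set the gfx target for your GPU, e.g. gfx1100 for RDNA3)
cmake -B build-rocm -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100
cmake --build build-rocm --config Release -j

# Run the same model through both builds and compare pp/tg tokens-per-second
./build-vulkan/bin/llama-bench -m model.gguf
./build-rocm/bin/llama-bench -m model.gguf
```

Since the gap reported in this thread is often within 10%, it's worth benchmarking with the specific model and quantization you actually plan to run.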
|
|