ilaksh | 9 hours ago
Cool, but is there a reason they can't just make PRs for vLLM and llama.cpp? Or maintain their own forks if those take too long to merge?
RealFloridaMan | 7 hours ago
They use the latest llama.cpp under the hood, built for specific AMD GPU hardware. Lemonade is really just a management plane/proxy: it translates Ollama/Anthropic API calls into the OpenAI format for llama.cpp, runs separate backends for STT/TTS and image generation, and lets you manage it all in one place.
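To illustrate what that proxy layer does, here's a minimal sketch (not Lemonade's actual code; all names are hypothetical) of mapping an Anthropic-style Messages request body onto an OpenAI chat-completions request body, the kind of translation such a proxy performs before forwarding to a llama.cpp server:

```python
def anthropic_to_openai(body: dict) -> dict:
    """Sketch: map Anthropic /v1/messages fields to OpenAI /v1/chat/completions.

    Assumes plain-string message content; real proxies also handle
    content blocks, tool calls, streaming, etc.
    """
    messages = []
    # Anthropic carries the system prompt as a top-level field;
    # OpenAI expects it as the first message in the list.
    if "system" in body:
        messages.append({"role": "system", "content": body["system"]})
    messages.extend(body.get("messages", []))
    return {
        "model": body["model"],
        "messages": messages,
        # max_tokens is required in Anthropic's API, optional in OpenAI's.
        "max_tokens": body.get("max_tokens", 512),
        "temperature": body.get("temperature", 1.0),
    }

req = {
    "model": "llama-3.1-8b",
    "system": "You are concise.",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hi"}],
}
print(anthropic_to_openai(req)["messages"][0]["role"])  # system
```

The rest of the proxy is mostly plumbing: routing each translated request to the right backend process (llama.cpp, the STT/TTS engine, or the image generator) and translating the response back.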