denismi 5 hours ago:
Hmm... Maybe some day.
mongrelion 4 hours ago:
llama.cpp moves too quickly to be added as a stable package. Instead, you can get it directly from the AUR: https://aur.archlinux.org/packages?O=0&K=llama.cpp There are packages for Vulkan, ROCm, and CUDA, and they all work.
FlyingSnake 4 hours ago:
I just installed llama.cpp on CachyOS after reading this article:

    yay -S llama.cpp

It’s much faster and better than Ollama.