▲ | boneitis 5 hours ago |
this was pretty much spot-on to my experience and track. the ridicule of people choosing to use ollama over llama.cpp is so tired. i had already burned an evening trying to debug and fix issues, getting nowhere fast, until i pulled ollama and had it working with just two commands. it was a shock. (granted, there is/was a crippling performance problem on sky/kabylake chips, but it was mitigated if you had any kind of mid-tier GPU and tweaked a couple of settings.) anyone who tries to contribute to the general knowledge base of deploying llama.cpp (like TFA) is doing heaven's work.
▲ | SteelPh0enix 2 hours ago | parent |
I have spent unreasonable amounts of time building llama.cpp for my hardware setup (AMD GPU) on both Windows and Linux. That was one of the main reasons I wrote that blog post. Lmao.