arendtio | 7 hours ago
I tried building and using llama.cpp multiple times, and after a while, I got so frustrated with the frequently broken build process that I switched to ollama with the following script:
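(The original script is not shown in the thread. As a hypothetical sketch of what a typical minimal ollama setup script looks like, assuming the official install script at ollama.com and the `ollama run` subcommand: install once if missing, then run a model. The `DRY_RUN` guard and `run` helper are illustrative additions so the commands can be previewed before executing; the model name `llama3` is also an assumption.)

```shell
#!/bin/sh
# Hypothetical sketch -- the commenter's actual script is not shown in the thread.
# Typical minimal ollama setup: install once, then run a model.
# DRY_RUN=1 (the default here) prints each command instead of executing it;
# set DRY_RUN=0 to actually install and run.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Install ollama via its official install script only if it isn't on PATH yet.
command -v ollama >/dev/null 2>&1 || run sh -c 'curl -fsSL https://ollama.com/install.sh | sh'

# Pull (if needed) and start an interactive session with a model.
run ollama run llama3
```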
And it just works :-)
boneitis | 6 hours ago
This was pretty much spot-on to my experience and track. The ridicule of people choosing to use Ollama over llama.cpp is so tired. I had already burned an evening trying to debug and fix build issues, getting nowhere fast, until I pulled Ollama and had it working with just two commands. It was a shock. (Granted, there is/was a crippling performance problem with Skylake/Kaby Lake chips, but it's mitigated if you have any kind of mid-tier GPU and tweak a couple of settings.) Anyone who tries to contribute to the general knowledge base of deploying llama.cpp (like TFA) is doing heaven's work.