otabdeveloper4 7 days ago
> Ollama lets you just install it, just install models, and go.

So does the original llama.cpp. And you won't have to deal with mislabeled models and insane defaults out of the box.
lxgr 6 days ago
Can it easily run as a server process in the background? To me, not having to load the LLM into memory for every single interaction is a big win of Ollama.
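(For context: llama.cpp does ship a `llama-server` binary that keeps the model resident and exposes an OpenAI-compatible HTTP API. A minimal sketch, where the model path and quantization are illustrative assumptions:)

```shell
# Start llama.cpp's bundled HTTP server once in the background;
# the model stays loaded in memory across requests.
# (Model filename is a placeholder -- substitute your own GGUF file.)
llama-server -m ./models/model-q4_k_m.gguf --port 8080 &

# Later interactions reuse the already-loaded model via the
# OpenAI-compatible endpoint, with no per-request load time:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```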