adrian_b 2 hours ago
Running llama-server (part of llama.cpp) starts an HTTP server on a specified port. You can connect to that port with any browser for chat, or with any application that supports the OpenAI API, e.g. a coding-assistant harness.
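To illustrate what "supports the OpenAI API" means here, a minimal sketch of the request an OpenAI-compatible client would send to llama-server. The port (8080), model name, and message content are assumptions for illustration; actually sending the request requires a running server.

```python
import json
import urllib.request

# Assumption: llama-server was started on its default port, e.g.
#   ./llama-server -m model.gguf --port 8080
BASE_URL = "http://localhost:8080/v1"

# The body follows the OpenAI chat-completions schema, which is why
# unmodified OpenAI-API clients can talk to llama-server.
payload = {
    "model": "local",  # llama-server serves the model it was launched with
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# With a live server, urllib.request.urlopen(req) would return the
# completion; it is not called here since no server is assumed running.
print(req.full_url)
```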