Any plans to support local models through llama.cpp or similar?
100% yes. Any favorites?
I daily drive llama.cpp, so that, please.
which local models? (e.g. qwen, llama, mistral?)
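For what it's worth, llama.cpp's bundled `llama-server` exposes an OpenAI-compatible HTTP API, so one low-effort integration path is pointing an OpenAI-style client at a local base URL. A minimal sketch of building such a request; the port, base URL, and model name are placeholders, not confirmed project details:

```python
import json

# Assumption: a local `llama-server` instance is running and exposes the
# OpenAI-compatible /v1/chat/completions endpoint (default port shown).
BASE_URL = "http://localhost:8080/v1"  # placeholder, configure to taste

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload for a local server."""
    return {
        "model": model,  # whatever GGUF model llama-server has loaded
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("local-model", "Hello from llama.cpp!")
print(json.dumps(payload))
```

The same payload shape would work for Qwen, Llama, or Mistral GGUF builds, since the server abstracts the model behind one endpoint.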