| ▲ | lxgr 10 hours ago |
| Does this work with (tool-use-capable) models hosted locally? |
|
| ▲ | parthsareen 10 hours ago | parent | next [-] |
| Hi - author of the post. Yes it does! The "build a search agent" example can be used with a local model. I'd recommend trying qwen3 or gpt-oss. |
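A minimal sketch of that flow with the `ollama` Python package, assuming the `web_search` helper from the post is available and qwen3 has been pulled locally (the `tool_name` key and prompt text are illustrative assumptions):

    import ollama

    def web_search(query: str) -> str:
        """Search the web and return result snippets for the model."""
        # Assumption: ollama.web_search is the helper from the post.
        return str(ollama.web_search(query=query))

    messages = [{"role": "user", "content": "What did Ollama ship this week?"}]

    # Pass the plain function as a tool; the library derives a JSON schema
    # from the signature and docstring.
    resp = ollama.chat(model="qwen3", messages=messages, tools=[web_search])

    if resp.message.tool_calls:
        messages.append(resp.message)
        for call in resp.message.tool_calls:
            messages.append({
                "role": "tool",
                "content": web_search(**call.function.arguments),
                "tool_name": call.function.name,
            })
        # Second pass lets the model answer from the search results.
        print(ollama.chat(model="qwen3", messages=messages).message.content)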
| |
▲ | lxgr 10 hours ago | parent [-] | | Very cool, thank you! Looking forward to trying it with a few shell scripts (via the llm-ollama extension for the amazing Python ‘llm’ tool) or Raycast (the lack of web search support for Ollama has been one of my biggest reasons for preferring cloud-hosted models).
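For the ‘llm’ route, a sketch of the equivalent Python call, assuming the llm-ollama plugin is installed and exposes pulled models under their Ollama names:

    import llm

    # llm-ollama registers local models by their Ollama names (assumption here).
    model = llm.get_model("qwen3")
    response = model.prompt("Summarize this changelog in one sentence.")
    print(response.text())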
▲ | parthsareen 10 hours ago | parent [-] | | Since we shipped web search with gpt-oss in the Ollama app, I've personally been using that a lot more, especially for research-heavy tasks that I can shoot off. Plus, with a 5090 or the new Macs, it's super fast.
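The web search piece on its own is a single call, assuming the `ollama` package's `web_search` helper (which, per the docs, needs an Ollama API key even though the generating model stays local):

    import ollama

    # Hits Ollama's hosted web search API; the query string is illustrative.
    results = ollama.web_search("gpt-oss benchmark results")
    print(results)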
|
| ▲ | yggdrasil_ai 10 hours ago | parent | prev [-] |
| I don't think Ollama officially supports proper tool use via its API. |
| |
▲ | lxgr 10 hours ago | parent [-] | | Huh, I was pretty sure I'd used it before, but maybe I'm confusing it with some other backend for the Python ‘llm’ tool. Is https://ollama.com/blog/tool-support not it?
▲ | all2 10 hours ago | parent [-] | | It depends on the model. DeepSeek-R1 says it supports tool use, but its system prompt template does not have the tool-include callouts. YMMV.
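One way to check a given model, assuming the `ollama` Python package: inspect the pulled model's raw chat template for tool callouts.

    import ollama

    # ShowResponse includes the model's chat template; no tool blocks usually
    # means the model won't be prompted with your tool schemas.
    info = ollama.show("deepseek-r1")
    print("tool" in (info.template or "").lower())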
|