▲ abhikul0 8 hours ago
Are you running it locally with llama.cpp? If so, is it working without any tweaking of the chat template? The tool calls fail for me with the default chat template; however, it seems to work a whole lot better with this: https://huggingface.co/Qwen/Qwen3.5-35B-A3B/discussions/9#69...
▲ sosodev 18 minutes ago | parent | next [-]
I’ve been running it via llama-server with no issues. I'm running the latest Bartowski 6-bit quant.
▲ arcanemachiner 8 hours ago | parent | prev [-]
Have you tried the '--jinja' flag in llama-server?
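For reference, a minimal sketch of how these options might be combined; the model path and template filename below are placeholders, not taken from the thread:

```shell
# Sketch, assuming llama.cpp's llama-server.
#
# --jinja enables Jinja parsing of the chat template, which llama.cpp
# needs for tool-call support; --chat-template-file overrides the
# template embedded in the GGUF with a patched one (e.g. a fixed
# template downloaded from a Hugging Face discussion).
llama-server \
  -m ./model-Q6_K.gguf \
  --jinja \
  --chat-template-file ./patched-template.jinja
```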