vlaaad | 3 days ago
Sure, but the need for accuracy will only increase; there is a difference between suggesting that an LLM put a schema in its context before calling a tool versus forcing the LLM to conform to a structured output returned dynamically from a tool. We already have 100% reliable structured outputs when building chatbots with direct LLM integrations; I don't want to lose that.
WithinReason | 3 days ago | parent
And LLMs will get more accurate. What happens when the LLM uses the wrong parameters? If that produces an immediate error, the model will just try again; no protocol changes needed, just better LLMs.
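The "immediate error, then retry" idea can be sketched as a plain validation step on the tool-call side. This is a hypothetical minimal sketch, not any real framework's API: `TOOL_SCHEMA` and `check_args` are illustrative names, and the schema format is invented for the example.

```python
# Hypothetical sketch: validate an LLM's tool-call arguments against a
# declared parameter schema, and return a machine-readable error message
# the model can see immediately and use to retry. The schema shape and
# all names here are illustrative assumptions, not a real protocol.

TOOL_SCHEMA = {
    "name": "get_weather",
    "parameters": {
        "city": {"type": str, "required": True},
        "units": {"type": str, "required": False},
    },
}

def check_args(schema, args):
    """Return None if args match the schema, else an error string."""
    params = schema["parameters"]
    # Reject missing required parameters and wrong types.
    for name, spec in params.items():
        if spec["required"] and name not in args:
            return f"missing required parameter: {name}"
        if name in args and not isinstance(args[name], spec["type"]):
            return f"parameter {name} must be {spec['type'].__name__}"
    # Reject parameters the tool never declared (e.g. a typo by the model).
    for name in args:
        if name not in params:
            return f"unknown parameter: {name}"
    return None

# A mistyped call gets an immediate, specific error instead of a silent failure:
print(check_args(TOOL_SCHEMA, {"cty": "Oslo"}))   # missing required parameter: city
# A corrected retry passes validation:
print(check_args(TOOL_SCHEMA, {"city": "Oslo"}))  # None
```

The point of the sketch is that the error is returned synchronously as tool output, so the model can correct itself within the same conversation turn loop, with no change to the protocol itself.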