| ▲ | LudwigNagasena 3 days ago |
| > there is no way to tell the AI agent “for this argument, look up a JSON schema using this other tool” |
| There is a description field; it seems sufficient for most cases. You can also dynamically change your tools using the `listChanged` capability. |
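A minimal sketch of the two mechanisms mentioned here, following the MCP tool shape (`name`, `description`, `inputSchema`) and the `tools/list_changed` notification from the spec; the forecast tool itself is a hypothetical example:

```python
import json

# Hypothetical MCP tool definition: the description field can carry
# schema guidance for an argument, and inputSchema is plain JSON Schema.
weather_tool = {
    "name": "get_forecast",
    "description": (
        "Get a weather forecast. The 'when' argument must be an "
        "ISO 8601 date, e.g. 2024-05-01."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "when": {"type": "string", "format": "date"},
        },
        "required": ["city"],
    },
}

# A server that declared the tools.listChanged capability can swap its
# tool definitions at runtime and announce the change with this
# JSON-RPC notification:
list_changed = {
    "jsonrpc": "2.0",
    "method": "notifications/tools/list_changed",
}

print(json.dumps(weather_tool, indent=2))
```

Dynamically replacing the tool (and re-announcing via `list_changed`) is how a server can tighten a schema between calls without any protocol change.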
|
| ▲ | vlaaad 3 days ago | parent [-] |
| Sure, but the need for accuracy will only increase; there is a difference between suggesting that an LLM put a schema in its context before calling the tool and forcing the LLM to conform to a structured output schema returned dynamically by a tool. We already have 100% reliable structured outputs when we build chatbots against LLM APIs directly; I don't want to lose that. |
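A sketch of what "100% reliable structured outputs" means when integrating with an LLM API directly: the request itself pins a JSON schema that decoding is constrained to, rather than hoping the model reads a schema from its context. The payload below follows the OpenAI structured-outputs request shape; the trip-plan schema is a hypothetical example, and no live call is made:

```python
# Request payload for a chat completion with a strict output schema.
# With "strict": True the decoder enforces the schema, so the reply is
# guaranteed to parse and validate -- not just "usually correct".
request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Plan a trip to Oslo."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "trip_plan",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "days": {"type": "integer"},
                },
                "required": ["city", "days"],
                "additionalProperties": False,
            },
        },
    },
}
```

This is the guarantee the comment is worried about losing: when a tool can only *describe* its schema in prose, nothing in the sampling loop enforces it.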
| |
| ▲ | WithinReason 3 days ago | parent [-] | | And LLMs will get more accurate. What happens when the LLM uses the wrong parameters? If it's an immediate error, it will just try again; no need for protocol changes, just better LLMs. | | |
| ▲ | vlaaad 3 days ago | parent [-] | | The difference between 99% reliability and 100% reliability is huge in this case. | | |
| ▲ | WithinReason 3 days ago | parent [-] | | Then I misunderstood the problem; I thought it would take only a few seconds for the LLM to issue the call, see the error, and fix the call. | | |
| ▲ | jtbayly 3 days ago | parent | next [-] | | Last time I used Gemini CLI, it still couldn't consistently edit a file. That was just a few weeks ago. In fact, it would go into a loop attempting the same edit, burning through many thousands of tokens and calls in the process: re-reading the file, attempting the same edit, rinse, repeat, until I stopped it. I didn't find it entertaining. | |
| ▲ | wahnfrieden 3 days ago | parent | prev [-] | | Big waste of context |
|