bravura · a day ago
Isn't the challenge that introspecting GraphQL will lead to either a) a very long set of definitions consuming many tokens, or b) many calls to drill into the introspection?
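For concreteness, option b) could look roughly like this - a Python sketch against a made-up endpoint, listing only the top-level query fields first and then drilling into a single type on demand instead of pulling the whole schema at once:

    # Rough sketch of drilling into introspection step by step.
    # The endpoint URL and the "Order" type name are hypothetical.
    import requests

    ENDPOINT = "https://api.example.com/graphql"  # hypothetical endpoint

    def graphql(query: str, variables: dict | None = None) -> dict:
        """POST a query to the GraphQL endpoint and return the data payload."""
        resp = requests.post(ENDPOINT, json={"query": query, "variables": variables or {}})
        resp.raise_for_status()
        return resp.json()["data"]

    # Step 1: list only the top-level query fields (cheap, few tokens).
    top_level = graphql("""
      { __schema { queryType { fields { name description } } } }
    """)

    # Step 2: drill into a single type on demand (one extra round trip per type).
    type_detail = graphql("""
      query TypeDetail($name: String!) {
        __type(name: $name) {
          fields { name args { name type { name kind } } }
        }
      }
    """, {"name": "Order"})  # "Order" is just an illustrative type name

So you either pay once for a huge schema dump, or you pay per round trip - both cost tokens somewhere.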
peacebeard · a day ago
In my experience, this was the limitation we ran into with this approach. If you have a large API, this will blow up your context. I have had the best luck with hand-crafted tools that pre-digest your API, so you don't have to waste tokens or deal with context-rot bugs.
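Roughly what I mean by pre-digesting, as a sketch - the endpoint, tool name and query are made up, but the point is the model only ever sees a tiny tool schema instead of the whole API:

    # Hide the full API behind one narrow tool with a small schema,
    # so the model never needs the introspection dump at all.
    import requests

    ENDPOINT = "https://api.example.com/graphql"  # hypothetical

    # Tool definition handed to the LLM: a few dozen tokens, not a full schema.
    SEARCH_ORDERS_TOOL = {
        "name": "search_orders",
        "description": "Search orders by customer email, newest first.",
        "parameters": {
            "type": "object",
            "properties": {"email": {"type": "string"}, "limit": {"type": "integer"}},
            "required": ["email"],
        },
    }

    # Fixed GraphQL query the tool executes; the model never sees or writes GraphQL.
    QUERY = """
      query SearchOrders($email: String!, $limit: Int = 10) {
        orders(filter: {customerEmail: $email}, first: $limit, sort: CREATED_DESC) {
          id status total createdAt
        }
      }
    """

    def search_orders(email: str, limit: int = 10) -> list[dict]:
        resp = requests.post(ENDPOINT, json={"query": QUERY, "variables": {"email": email, "limit": limit}})
        resp.raise_for_status()
        return resp.json()["data"]["orders"]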
_pdp_ · 15 hours ago
Well, either that or stuff the tool usage examples into the prompt for every single request. If you have only 2-3 tools, GraphQL is certainly not necessary - but it won't blow up the context either. If you have 50+ tools, I don't see any other way, to be honest, unless you create your own tool discovery solution - which is what GraphQL already does really well - with the caveat that whatever custom scheme you come up with is certainly not natural to these LLMs.

Keep in mind that all LLMs are trained on many GraphQL examples, because the technology has been around since 2015. Anything custom might just work, but it is certainly not part of the model's training set unless you fine-tune. So yes, if I need to decide on formats, I will go for GraphQL, SQL and Markdown.
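To sketch what that discovery setup can look like (tool names and endpoint are made up): expose two generic tools - one for targeted introspection, one for running queries - and let the model find the rest on demand:

    # Instead of 50+ bespoke tools, give the model two generic ones and let it
    # discover the schema incrementally. Names and endpoint are illustrative.
    import requests

    ENDPOINT = "https://api.example.com/graphql"  # hypothetical

    def execute(query: str, variables: dict | None = None) -> dict:
        resp = requests.post(ENDPOINT, json={"query": query, "variables": variables or {}})
        resp.raise_for_status()
        return resp.json()

    TOOLS = [
        {
            "name": "introspect_type",
            "description": "Look up the fields and arguments of one schema type by name.",
            "parameters": {
                "type": "object",
                "properties": {"type_name": {"type": "string"}},
                "required": ["type_name"],
            },
        },
        {
            "name": "run_graphql",
            "description": "Run a GraphQL query against the API and return the JSON result.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}, "variables": {"type": "object"}},
                "required": ["query"],
            },
        },
    ]

    def introspect_type(type_name: str) -> dict:
        """Return just the fields of one type - a small, targeted slice of the schema."""
        return execute(
            "query($n: String!) { __type(name: $n) { fields { name type { name kind } } } }",
            {"n": type_name},
        )

The point is that the query language and the introspection format are ones the model has already seen millions of times, so you don't have to teach it your own discovery convention.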