aaronsteers 4 hours ago
AJ here, from Airbyte. Yes, we've definitely found that some API data models are easier for models to navigate than others. The largest sources of agent inefficiency we've identified so far are:

1. Many APIs lack robust-enough search, forcing agents to page through hundreds or thousands of paginated responses until they find the record they're looking for (our Context Store addresses this).

2. Many APIs have HUGE response payloads. Our MCP helps with this by letting the agent choose exactly which fields are returned.

3. With our SDK, you can build your own MCP on top of any source we support (50+ right now, and growing). This is super powerful: it lets you build more ergonomic MCP servers and tools even when the underlying data models aren't intuitive or easy for the LLM to use directly.

Combining all three, we find the vast majority of challenges can be addressed with a strong system prompt for guidance. Fine-tuning could get you further, but you'd still want your fine-tuned model to build on this same foundation, since the efficiencies transfer across use cases and models.

@ecares - Does this answer your question? What do you think?
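The field-selection idea in point 2 can be sketched in a few lines (hypothetical names here, not Airbyte's actual MCP interface): the server projects each record down to only the fields the agent asked for before returning, so response size stays proportional to what the agent actually needs.

```python
# Sketch of agent-driven field selection (hypothetical helper, not
# Airbyte's real API): drop every field the agent didn't ask for.

def project_records(records, fields):
    """Return only the requested fields from each record."""
    return [{k: r[k] for k in fields if k in r} for r in records]

raw = [
    {"id": 1, "name": "Ada", "email": "ada@example.com", "notes": "..."},
    {"id": 2, "name": "Grace", "email": "grace@example.com", "notes": "..."},
]

# The agent asked only for id and name, so email/notes never hit its context.
slim = project_records(raw, ["id", "name"])
print(slim)  # [{'id': 1, 'name': 'Ada'}, {'id': 2, 'name': 'Grace'}]
```

The design point is that the *agent* picks the projection per call, rather than the server shipping a fixed (and usually huge) schema every time.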
woeirua 2 hours ago
Your point about search being a bottleneck is spot on. IMO, search APIs should return guidance to agents to help them winnow down the results faster. For example, if your query returns 1000 results, then it should tell the agent, "too many results, we recommend you filter on column X because of Y to improve your search. Here are the possible values in column X: ..."
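That guidance could be returned as structured data rather than free text, so the agent can act on it mechanically. A minimal sketch of the idea, with all names and the threshold being hypothetical:

```python
# Hypothetical search endpoint that, instead of dumping an oversized
# result set, tells the agent which column to filter on and what
# values that column takes.
from collections import Counter

MAX_RESULTS = 100  # hypothetical cutoff for "too many results"

def search_with_guidance(records, query):
    """Search by name; above the cutoff, return refinement guidance."""
    hits = [r for r in records if query.lower() in r["name"].lower()]
    if len(hits) <= MAX_RESULTS:
        return {"status": "ok", "results": hits}
    # Suggest a filter column and enumerate its values among the hits.
    # (A real server would pick the most discriminating column.)
    col = "region"
    values = Counter(r[col] for r in hits)
    return {
        "status": "too_many_results",
        "count": len(hits),
        "suggestion": f"filter on '{col}' to narrow the search",
        "possible_values": dict(values),
    }
```

With a structured `too_many_results` response like this, the agent's next call can add the filter directly instead of paging through everything.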