zozbot234 | 9 days ago
I don't think this is correct, because AI output can be constrained to a fixed format (such as JSON) during inference. Then MCP is useful because the "tool_calls" section of that fixed JSON output can be restricted to only mention tools that are included in the MCP input, their input parameters can be constrained too, and so on. Free text input wouldn't give you any of that!
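To make the constrained-decoding point concrete, here's a minimal sketch: during inference the sampler only emits output matching a schema like the one below, with the tool name restricted via "enum" to the advertised tool set. The tool names and the `is_valid_call` helper are made-up illustrations, and the check stands in for a real schema validator / decoding grammar.

```python
# Hypothetical sketch of a constrained tool-call schema. The sampler would
# only allow tokens that keep the output valid under this schema, so the
# model can never name a tool outside "enum". Tool names are invented.
allowed_tools = ["get_weather", "search_docs"]

tool_call_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "enum": allowed_tools},
        "arguments": {"type": "object"},
    },
    "required": ["name", "arguments"],
}

def is_valid_call(call: dict) -> bool:
    """Minimal stand-in for a real JSON Schema validator."""
    return (
        isinstance(call.get("name"), str)
        and call["name"] in tool_call_schema["properties"]["name"]["enum"]
        and isinstance(call.get("arguments"), dict)
    )

print(is_valid_call({"name": "get_weather", "arguments": {"city": "Oslo"}}))  # True
print(is_valid_call({"name": "rm_everything", "arguments": {}}))  # False
```

Free-text output gives you no equivalent guarantee: nothing stops the model from inventing a tool name.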
meander_water | 9 days ago
I think you're mixing up tool calling and structured outputs. You can have both of those, or either one, without MCP. MCP just standardizes tool calling, and it only makes sense if you want to share your tools across the org. I wouldn't use it for simple functions, e.g. getting the current date.
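For example, structured outputs work with no tools at all. Here's a sketch of a request body using a structured-outputs `response_format` (shape per OpenAI's structured-outputs docs; the model name and the schema itself are made-up examples):

```python
# Structured-outputs request sketch. Note there is no "tools" key at all:
# constraining the output to a JSON schema needs neither tool calling nor
# MCP. Model name and schema are illustrative only.
request_body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Extract the date mentioned."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "extracted_date",
            "schema": {
                "type": "object",
                "properties": {"date": {"type": "string"}},
                "required": ["date"],
                "additionalProperties": False,
            },
        },
    },
}
```

Tool calling is the orthogonal feature, declared via a separate `tools` parameter; MCP only standardizes where those tool definitions come from.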
medbrane | 8 days ago
Indeed, the article would have been correct one year ago. Now, modern LLM APIs do require the tools to be described outside the prompt [1]. This undercuts the whole article, although on one point he's right: it doesn't matter whether those tools are MCP tools or local ones; the call to the LLM looks the same. [1] https://platform.openai.com/docs/guides/function-calling?api...