dbreunig 8 days ago
Came here to say this: people present MCP’s verbosity as if it gives the LLM all the context it needs. But it almost never does. I wrote recently, “Connecting your model to random MCPs and then giving it a task is like giving someone a drill and teaching them how it works, then asking them to fix your sink. Is the drill relevant in this scenario? If it’s not, why was it given to me? It’s a classic case of context confusion.” https://www.dbreunig.com/2025/07/30/how-kimi-was-post-traine...
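
To make the “why was it given to me?” point concrete, here’s a rough sketch in plain Python (made-up tool schemas and a naive keyword filter, not any real MCP SDK) of gating which tool definitions ever reach the context, instead of dumping every schema in and hoping the model ignores the drill:

    # Hypothetical tool schemas, shaped like what an MCP server might advertise.
    TOOLS = [
        {"name": "run_drill", "description": "Operate a power drill at a given RPM"},
        {"name": "shutoff_water", "description": "Close the supply valve under a sink"},
        {"name": "replace_washer", "description": "Swap the worn washer in a faucet assembly"},
    ]

    def select_tools(task: str, tools: list[dict]) -> list[dict]:
        """Naive relevance gate: keep a tool only if its description shares a word with the task."""
        task_words = {w for w in task.lower().split() if len(w) > 3}  # crude stopword skip
        return [t for t in tools if task_words & set(t["description"].lower().split())]

    task = "fix the leaking sink faucet"
    for tool in select_tools(task, TOOLS):
        # Only these schemas spend tokens in the context window; the drill never makes the cut.
        print(tool["name"])

A real filter would be smarter than keyword overlap (embeddings, a routing model, whatever), but the point stands: if a tool can’t plausibly be used for the task, its schema is just noise the model has to wade through.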