aabhay 13 hours ago:
One thing about MCP that some people forget is that the models are post-trained on MCP-based rollouts. People tend to imagine MCP was something discovered about models, but it goes deeper than that: models are explicitly trained to interpret varied and unseen kinds of MCP system prompts. The same applies to these Claude Skills. Technically a skill is "just a system prompt and some tools", but what's really happening is that LLM labs are intentionally encoding specific frameworks of action into the models.
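For context on the "just a system prompt and some tools" point: a Claude Skill is essentially a folder containing a SKILL.md file, with YAML frontmatter that the model sees and markdown instructions it loads on demand. A minimal sketch (the skill name, description, and script path here are illustrative, not from any real skill):

```markdown
---
name: pdf-report
description: Generate a PDF report from structured data. Use when the user asks for a PDF summary.
---

# PDF Report Skill

1. Read the structured data the user provides.
2. Run the bundled `scripts/render.py` helper to produce the PDF.
3. Return the path of the generated file to the user.
```

Nothing in the file is model-specific; the frontmatter is surfaced to the model so it can decide when to pull in the full instructions, which is why the question of whether models are trained on this format matters.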
simonw 12 hours ago (in reply):
A source I trust told me that Anthropic's models haven't yet been deliberately trained to know about skills.