Flux159 · 6 hours ago
This was announced in early preview a few days ago by Chrome as well: https://developer.chrome.com/blog/webmcp-epp

I think the GitHub repo's README may be more useful: https://github.com/webmachinelearning/webmcp?tab=readme-ov-f...

The prior implementations may also be worth looking at: https://github.com/MiguelsPizza/WebMCP and https://github.com/jasonjmcghee/WebMCP
politelemon · 5 hours ago
This GitHub README was helpful in understanding their motivation, cheers for sharing it.

> Integrating agents into it prevents fragmentation of their service and allows them to keep ownership of their interface, branding and connection with their users

Looking at the contrived examples given, I just don't see how they're achieving this. In fact, it looks like creating MCP-specific tools will achieve exactly the opposite: there will immediately be two ways to accomplish a thing, and this will result in drift over time as developers need to account for two ways of interacting with a component on screen. There should be no difference, but there will be.

Having the LLM interpret and understand a page's context would be much more in line with assistive technologies. It would also push site owners to provide a more useful interface for people who need assistance.
| ||||||||