| ▲ | avaer 5 hours ago |
| In a nutshell: Google wants your websites to be more easily used by the agents they are putting in the browser and other products. They own the user layer and models, and get to decide if your product will be used. Think search monopoly, except your site doesn't even exist as far as users are concerned, it's only used via an agent, and only if Google allows. The work of implementing this is on you. Google is building the hooks into the browser for you to do it; that's WebMCP. It's all opaque; any oopsies/dark patterns will be blamed on the AI. The profits (and future ad revenue charged for sites to show up on the LLM's radar) will be claimed by Google. The other AI companies are on board with this plan. Any questions? |
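For concreteness, here is a rough sketch of what those browser "hooks" might look like from a site author's side. The API surface (`navigator.modelContext.registerTool`, `callTool`) is illustrative guesswork, not taken from any published WebMCP draft; the tiny mock registry exists only so the sketch runs outside a browser.

```javascript
// Hypothetical sketch of a site exposing a "tool" to in-browser agents.
// The navigator.modelContext API shape is assumed, not confirmed.
// Fall back to a mock registry when no browser-provided one exists.
const modelContext = (globalThis.navigator && globalThis.navigator.modelContext) || {
  tools: new Map(),
  registerTool(tool) { this.tools.set(tool.name, tool); },
  async callTool(name, args) { return this.tools.get(name).execute(args); },
};

// The site describes, in its own words, what an agent may do here.
// This description layer is exactly the "opaque" part the comment
// above worries about: the user never sees the page, only the tool.
modelContext.registerTool({
  name: 'search_products',
  description: 'Search the store catalog by keyword.',
  inputSchema: { type: 'object', properties: { query: { type: 'string' } } },
  async execute({ query }) {
    // A real site would hit the same backend the human UI uses.
    const catalog = ['red shoes', 'blue shoes', 'green hat'];
    return catalog.filter((item) => item.includes(query));
  },
});

// An agent (not the user) invokes the tool:
modelContext.callTool('search_products', { query: 'shoes' })
  .then((results) => console.log(results));
```

The point of the sketch: the work of describing and implementing each tool falls on the site, while the decision of when (or whether) to call it belongs to the agent vendor.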
|
| ▲ | moregrist 4 hours ago | parent | next [-] |
| Knowing Google, there’s a good chance it will turn out like AMP [0]: concerning, but with only spotty adoption, and ultimately kind of abandoned and irrelevant. It’s the Google way. [0] https://en.wikipedia.org/wiki/Accelerated_Mobile_Pages |
| |
| ▲ | verandaguy 2 hours ago | parent | next [-] | | > but only spotty adoption
While I'm glad AMP never got truly widespread adoption, it did get adopted in the places that mattered -- notably, major news sites. The number of times I've had to translate an AMP link I found online before sending it on to friends, in the hope of reducing the tracking impact, has been huge over the years. There are extensions that'll do it now, but that hasn't always been the case, and they aren't foolproof either. I do hope this MCP push fizzles, but I worry that Google could just double down and expose users to even less of the web (indirectly) by only surfacing results from MCP-enabled pages. It'd be like burning the Library of Alexandria, but at this point I wouldn't put it past the tech giants. | |
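That AMP-link translation is mechanical enough to sketch. The heuristic below handles two common cache-URL shapes; it is a sketch, not exhaustive, since real AMP cache URLs come in more variants than this.

```javascript
// Heuristic AMP-link cleaner -- recovers the publisher URL from two
// common AMP cache forms. Not exhaustive; unknown shapes pass through.
function deAmp(url) {
  const u = new URL(url);
  // Google cache form: https://www.google.com/amp/s/example.com/path
  if (u.hostname.endsWith('google.com') && u.pathname.startsWith('/amp/')) {
    const scheme = u.pathname.startsWith('/amp/s/') ? 'https' : 'http';
    const rest = u.pathname.replace(/^\/amp\/(s\/)?/, '');
    return `${scheme}://${rest}`;
  }
  // AMP CDN form: https://example-com.cdn.ampproject.org/c/s/example.com/path
  if (u.hostname.endsWith('.cdn.ampproject.org')) {
    const scheme = u.pathname.startsWith('/c/s/') ? 'https' : 'http';
    const rest = u.pathname.replace(/^\/c\/(s\/)?/, '');
    return `${scheme}://${rest}`;
  }
  return url; // not a recognized AMP URL; leave it alone
}

console.log(deAmp('https://www.google.com/amp/s/example.com/story'));
```

The `/s/` segment in both forms marks an HTTPS origin, which is why the scheme is inferred from it.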
| ▲ | notnullorvoid 3 hours ago | parent | prev | next [-] | | Hopefully that's what happens, but compared to AMP there seems to be more of a joint standardisation effort this time, which worries me. | |
| ▲ | candiddevmike 2 hours ago | parent | prev | next [-] | | AMP lives on, mostly as AMP for Email, used by things like Google Workspace for performing actions within an email body (basically allow-listed JavaScript). | |
| ▲ | DaiPlusPlus 3 hours ago | parent | prev [-] | | > It’s the Google way. Don't forget the all-important last step: abruptly killing the product - no matter how popular or praiseworthy it is (or heck: even profitable!) if unnamed Leadership figures say so; vide: killedbygoogle.com |
|
|
| ▲ | oefrha 5 hours ago | parent | prev | next [-] |
| The irony is Google properties are more locked down than ever. When I use a commercial VPN I get ReCAPTCHA’ed half of the time doing every single Google search; and can’t use YouTube in Incognito sometimes, “Sign in to confirm you’re not a bot”. |
| |
| ▲ | verandaguy 2 hours ago | parent | next [-] | | There's also the newer push against what they're calling "model distillation," where their models get prompted in specific ways to try to extract their behaviour. Coming from a limited background in machine learning broadly, but especially the post-transformer era, that doesn't seem like something that could be done productively at any useful scale. | | |
| ▲ | nl an hour ago | parent [-] | | Model distillation is very useful! Put it like this: Reinforcement Learning from Human Feedback (RLHF) is useful with hundreds of examples, and LLM distillation is basically the same thing. |
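For readers unfamiliar with the term the two comments above are debating: classic distillation trains a student model to match a teacher's softened output distribution. A toy illustration of that loss (not how any lab actually implements it, and temperature 2 is an arbitrary choice here):

```javascript
// Toy knowledge-distillation loss: KL divergence between the teacher's
// and student's temperature-softened output distributions.
function softmax(logits, T = 1) {
  const exps = logits.map((z) => Math.exp(z / T));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

function distillLoss(teacherLogits, studentLogits, T = 2) {
  const p = softmax(teacherLogits, T); // teacher's softened distribution
  const q = softmax(studentLogits, T); // student's softened distribution
  // KL(p || q): zero when the student matches the teacher exactly,
  // positive otherwise -- training pushes this toward zero.
  return p.reduce((acc, pi, i) => acc + pi * Math.log(pi / q[i]), 0);
}
```

The "distillation via prompting" the thread describes swaps the teacher's logits for sampled API outputs, which is exactly why it is noisier and harder to do at scale.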
| |
| ▲ | meibo 5 hours ago | parent | prev [-] | | That's by design: their own agents, running on their hardware in their network, will pass every ReCAPTCHA on every customer site. |
|
|
| ▲ | the_arun 4 hours ago | parent | prev | next [-] |
| What about authentication? Should users have to be on Google SSO to use WebMCP? |
| |
| ▲ | the_arun 4 hours ago | parent [-] | | Here is the answer from Gemini: > Google's Web Model Context Protocol (WebMCP) handles authentication by inheriting the user's existing browser session and security context. This means that an AI agent using WebMCP operates within the same authentication boundaries (session cookies, SSO, etc.) that apply to a human user, without requiring a separate authentication layer for the agent itself. | | |
| ▲ | misnome 4 hours ago | parent [-] | | Here’s what Gemini says about copy-pasting AI answers: > Avoid "lazy" posting—copying a prompt result and pasting it without any context. If the user wanted a raw AI answer, they likely would have gone to the AI themselves. |
|
|
|
| ▲ | solaire_oa 5 hours ago | parent | prev | next [-] |
| We should definitely feel trepidation at the prospect of any LLM-guided browser, not just WebMCP (e.g. Claude for Chrome enters the same opaque LLM-controlled decision process, OpenClaw, etc.). Just one example: prompting the browser to "register example.com" means Google/Anthropic gets to hustle registrars for SEO-style priority. Using countermeasures like CAPTCHAs locks you out of the LLM market. Google's incentive to let you shop around via traditional web search decreases once traditional ads are less lucrative (businesses will catch on that blanket targeted ads aren't as effective as a "referral" that directs an LLM to sign up, purchase, or exchange something directly)... expect web search quality to decline, perhaps intentionally. The only counter I can conceive of is open models, which are not yet as good as private ones, in no small part because of the extraordinary investment subsidizing the latter. We can hope for the bubble to pop, but plan for a deader Internet. Meanwhile, trust online begins to evaporate as nobody can tell an LLM from a human-conducted browser. The Internet at large is entering some very dark waters. |
|
| ▲ | morkalork 5 hours ago | parent | prev | next [-] |
| Oh ho, this is the succinct and correct evaluation. Buckle up y'all, you're gonna be taken for a ride. |
|
| ▲ | socalgal2 5 hours ago | parent | prev [-] |
| The Google hate virus is thick here. It seems uncontroversial that users will likely want to use AI to find info and do things for them. So either Google provides users with what they want, or it loses them to some other company that does. https://www.perplexity.ai/comet https://chatgpt.com/atlas/ https://arc.net/max That is not in any way to suggest it's OK for companies to do bad things. I just don't see anything bad here; I see the inevitable. People are going to want to ask some AI for whatever they used to get from the internet. Many are already doing this. Whoever enables that best will get the users. |
| |
| ▲ | ceejayoz 3 hours ago | parent | next [-] | | > Who ever enables that for users best will get the users. And if it's anything like Uber, that'll be when the enshittification really kicks into gear. | |
| ▲ | maximinus_thrax 4 hours ago | parent | prev [-] | | > It seems uncontroversial that users will likely want to use AI to find info for them and do things for them Lots of weasel words in there: "seems," "uncontroversial," and "likely" are doing a lot of work. Power users and tech professionals probably want this, or their bosses really want it and they fall in line. But a large portion of 'normal' users still struggle with basic search, distrust AI, or just don't trust delegating tasks to opaque systems they can't inspect. "Users" is not a monolith. | | |
| ▲ | socalgal2 2 hours ago | parent [-] | | It's the opposite. Only HNers distrust AI. The "normies" love it and are far less skeptical. Few of them recognize when it's messing up. |
|
|