▲ jsmith99 | 6 hours ago
There's nothing specific to Gemini and Antigravity here. This is an issue for all agent coding tools with CLI access. Personally, I'm hesitant to allow mine (I use Cline) access to a web search MCP, and I tend to give it only relatively trustworthy URLs.
▲ ArcHound | 6 hours ago
For me the story is that Antigravity tried to prevent this with a domain whitelist and file restrictions. They forgot about a whitelisted service that enables arbitrary redirects, so the attackers used it. And the LLM itself used the system shell to proactively bypass the file protection.
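Roughly the failure mode, as a sketch (the allowlist entries and redirector host are invented, and this is not Antigravity's actual code): the check looks only at the host the agent asks for, but the HTTP client follows redirects, so a whitelisted open redirector can land the request anywhere.

    # Naive allowlist sketch: validates only the requested host, then lets
    # the client follow redirects. A whitelisted open redirector defeats it.
    from urllib.parse import urlparse
    import urllib.request

    ALLOWED_HOSTS = {"docs.example.com", "redirect.example.com"}  # hypothetical

    def naive_fetch(url: str) -> bytes:
        host = urlparse(url).hostname
        if host not in ALLOWED_HOSTS:
            raise PermissionError(f"blocked host: {host}")
        # urlopen follows redirects by default, so the response can come
        # from an attacker-controlled host that was never checked.
        with urllib.request.urlopen(url) as resp:
            print("final URL after redirects:", resp.geturl())
            return resp.read()

Re-validating every redirect hop, or refusing to follow redirects automatically, closes that particular hole.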
▲ dabockster | 4 hours ago
> Personally, I'm hesitant to allow mine (I use Cline) access to a web search MCP, and I tend to give it only relatively trustworthy URLs.

Web search MCPs are generally fine. Whatever is facilitating tool use (whatever program is controlling both the AI model and the MCP tool) is the real attack vector.
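To make that concrete, here's a toy agent loop (the model and tool interfaces are invented for illustration, not any real MCP client): the search server just returns text; it's the harness that pastes that untrusted text back into the model's context, which is where the injection actually lands.

    # Toy agent loop (model and tool interfaces invented for illustration).
    # The search tool just returns text; the harness is what feeds that
    # untrusted text back into the model's context on the next turn.
    def agent_loop(model, tools, user_goal):
        messages = [{"role": "user", "content": user_goal}]
        while True:
            reply = model.complete(messages)           # hypothetical model API
            if reply.tool_call is None:
                return reply.text                      # no more tool use: done
            tool = tools[reply.tool_call.name]
            result = tool(**reply.tool_call.args)      # e.g. a web search MCP
            # Injection happens here: whatever the page said is now part of
            # the conversation, and the model can't reliably tell "data the
            # tool fetched" apart from "instructions it should follow".
            messages.append({"role": "tool", "content": result})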
▲ connor4312 | 3 hours ago
Copilot will prompt you before accessing untrusted URLs. It seems like the crux of the vulnerability is that the user didn't need to consent before the agent hit a URL that was effectively an open redirect.
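A gate along those lines might look like this sketch (stdlib only; TRUSTED_HOSTS and the prompt wording are made up, and this isn't Copilot's or Antigravity's implementation). Redirects aren't followed silently, so every hop, including the open redirector's target, gets the same trust check.

    # Consent gate sketch: refuse to follow redirects automatically, and
    # require every hop (including redirect targets) to be trusted or
    # explicitly approved by the user.
    from urllib.parse import urlparse, urljoin
    import urllib.error
    import urllib.request

    TRUSTED_HOSTS = {"github.com", "docs.python.org"}  # hypothetical allowlist

    class NoAutoRedirect(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None  # surface the redirect as an HTTPError instead

    def guarded_fetch(url: str, max_hops: int = 5) -> bytes:
        opener = urllib.request.build_opener(NoAutoRedirect)
        for _ in range(max_hops):
            host = urlparse(url).hostname or ""
            if host not in TRUSTED_HOSTS:
                answer = input(f"Agent wants to fetch {url} - allow? [y/N] ")
                if answer.strip().lower() != "y":
                    raise PermissionError(f"user declined: {url}")
            try:
                with opener.open(url) as resp:
                    return resp.read()
            except urllib.error.HTTPError as err:
                location = err.headers.get("Location") if err.headers else None
                if err.code in (301, 302, 303, 307, 308) and location:
                    url = urljoin(url, location)  # next hop gets re-checked
                    continue
                raise
        raise RuntimeError("too many redirects")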
| ||||||||||||||
▲ informal007 | 3 hours ago
Speaking of filtering for trustworthy URLs, Google is best placed to do that, because it has more historical data from its search business. I hope Google can do something to prevent prompt injection for the AI community.
| ||||||||||||||
▲ IshKebab | 5 hours ago
I do think they deserve some of the blame for encouraging you to allow all commands automatically by default.
| ||||||||||||||