Remix.run Logo
simonw 4 hours ago

More reports of similar vulnerabilities in Antigravity from Johann Rehberger: https://embracethered.com/blog/posts/2025/security-keeps-goo...

He links to this page on the Google vulnerability reporting program:

https://bughunters.google.com/learn/invalid-reports/google-p...

That page says that exfiltration attacks against the browser agent are "known issues" that are not eligible for reward (they are already working on fixes):

> Antigravity agent has access to files. While it is cautious in accessing sensitive files, there’s no enforcement. In addition, the agent is able to create and render markdown content. Thus, the agent can be influenced to leak data from files on the user's computer in maliciously constructed URLs rendered in Markdown or by other means.

And for code execution:

> Working with untrusted data can affect how the agent behaves. When source code, or any other processed content, contains untrusted input, Antigravity's agent can be influenced to execute commands. [...]

> Antigravity agent has permission to execute commands. While it is cautious when executing commands, it can be influenced to run malicious commands.

kccqzy 3 hours ago | parent [-]

As much as I hate to say it, the fact that the attacks are “known issues” seems well known in the industry among people who care about security and LLMs. Even as an occasional reader of your blog (thank you for maintaining such an informative blog!), I know about the lethal trifecta and the exfiltration risks since early ChatGPT and Bard.

I have previously expressed my views on HN about removing one of the three lethal trifecta; it didn’t go anywhere. It just seems that at this phase, people are so excited about the new capabilities LLMs can unlock that they don’t care about security.