| ▲ | coffeefirst 6 hours ago | |
Seriously. I don’t see any way to make any of this safe unless all it does is receive information and queue suggestions for the user. But that’s not an agent, that’s a webhook. Even without disk access, you can email the agent and tell it to forward all the incoming forgot password links. [Edit: if anyone wants to downvote me that's your prerogative, but want to explain why I'm wrong?] | ||
| ▲ | msdz 4 hours ago | parent [-] | |
I agree, this is inherently unsafe. The two core security issues for agents, I’d say, are in LLMs not producing a “deterministic” outcome, and prompt injection. Prompt injection is _probably_ solvable if something like [1] ever finds a mainstream implementation and adoption, but agents not being deterministic, as in “do not only what I’ve told you to do, but also how I meant it”, all while assuming perfect context retention, is a waaay bigger issue. If we ever were to have that, software development as a whole is solved outright, too. [1] Google DeepMind: Defeating Prompt Injections by Design. https://arxiv.org/abs/2503.18813 | ||