Remix.run Logo
yorwba 4 days ago

The flaw isn't just in the design, it's in the requirements. People want an AI that reads text they didn't read and does the things the text says need to be done, because they don't want to do those things themselves. And they don't want to have to manually approve every little action the AI takes, because that would be too slow. So we get the equivalent of clicking "OK" on every dialog that pops up without reading it, which is also something that people often do to save a bit of time.

layer8 4 days ago | parent [-]

This isn’t a problem with human assistants, so it can’t be a fundamental problem of requirements.

tsimionescu 4 days ago | parent [-]

It absolutely is a problem with human assistants (though, of course, those are currently much smarter). But people can and have scammed assistants to steal money or personal details from their bosses. Phishing and social engineering are exactly forms of this same vulnerability. Of course, human assistants are smart enough to not get phished by, say, reading a book that happens to contain phrases that are similar to commands that their boss could give them, but that's just the current difference of intelligence and the hugely larger context windows humans still have compared to LLMs.