WatchDog 4 hours ago
TL;DR: They generated phishing emails with LLMs and sent them to 108 elderly people who had agreed to participate in a study; 11% of the recipients clicked a link. Generating a phishing email isn't very difficult, with or without an LLM, and claiming that someone was "compromised" because they clicked a link seems disingenuous. More interesting to me is using LLMs for multi-turn phishing correspondence with victims; the paper mentions this in the discussion, but it isn't something they appear to have actually tested.
DalasNoin 4 hours ago
(Author here.) I think it's interesting to see that models like Gemini will do basically whatever you want. This study was mainly designed to support an otherwise mostly anecdotal investigative report on AI scams targeting seniors. We have also worked on related topics, like voice scams and using AI research for hyper-personalized phishing: https://www.lesswrong.com/posts/GCHyDKfPXa5qsG2cP/human-stud...