827a 4 days ago

The first example, buying an Apple Watch on a fake Walmart site, feels extremely disingenuous to me. The marketing screenshot implies the query was "buy me an apple watch on walmart", as if the AI navigated to the scam website on its own, but the actual query was "I found this walmart shopping website. Can you buy an apple watch...". The experimenters poisoned the well by handing the AI the site to shop on.

"No clicks, No Typing, your AI just got you scammed" you navigated to a scam site and typed out the whole prompt. It did what you told it to do.

The Wells Fargo email is similar: the instructions given to the AI explicitly told it to follow the instructions in the email. Adding some kind of coherence check between what an email claims and the sender's actual domain could be a good use case for LLMs, but as it stands the complaint amounts to "I told the LLM to delete my entire filesystem and then it actually did it! Why didn't it stop? Claude Code is a scam!" At best this rises to the level of "interesting directions these products should develop toward"; it's entirely unjustified to title the article "Scamplexity".
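For what it's worth, even a crude non-LLM version of that coherence check is cheap to sketch. This is purely my own illustration (the function name, the naive suffix-match heuristic, and the example domains are all hypothetical, not anything from the article, and a real mail filter would do far more, e.g. consult the Public Suffix List):

```python
import re
from urllib.parse import urlparse

def coherence_check(sender_domain: str, body: str) -> list[str]:
    """Return links in the email body whose host doesn't belong to the
    claimed sender's domain. Naive sketch: treats any host that isn't the
    sender domain or a subdomain of it as suspicious."""
    suspicious = []
    for url in re.findall(r'https?://[^\s"\'<>]+', body):
        host = urlparse(url).hostname or ""
        if not (host == sender_domain or host.endswith("." + sender_domain)):
            suspicious.append(url)
    return suspicious

# A link on the real domain passes; a lookalike domain gets flagged.
print(coherence_check("wellsfargo.com",
                      "Verify at https://secure.wellsfargo.com/login"))
print(coherence_check("wellsfargo.com",
                      "Verify at https://wellsfargo-secure.evil.example/login"))
```

Even a check this dumb would have flagged the mismatch in their demo; an LLM layered on top could additionally compare the email's claims against the page it lands on.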

An embarrassing article for whoever Guard.io is, tbh.