cjonas | 4 days ago
If you only give the AI the ability to do what the end user can already do, the risk is extremely low. It's essentially no different than building a static web app where the client is connected to an API for all operations. It basically just becomes a new way to interface with an application. However... that's not how a lot of people are building. Giving an agentic system sensitive information (like passwords or credit cards) and then opening it up to the entire internet as a source of input is asking for your info to be stolen. It'd be like asking your grandma with dementia to manage all your email and online banking.
acdha | 4 days ago | parent
> If you only give the AI the ability to do what the end user can already do, the risk is extremely low.

Just because I can send my money to Belize doesn't mean it's safe to give an LLM the ability to do the same. Until there's a huge breakthrough on actual intelligence, giving an LLM attacker-controlled inputs is an inherently high-risk activity.
cjonas | 4 days ago | parent
I'll also add that the problem in the article seems pretty solvable by allowing the user to scope the agentic capabilities to specific websites (e.g. "walmart.com:allow_cc,allow_address").
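A minimal sketch of what that per-site scoping could look like. The capability names follow the example above; the policy format, function names, and subdomain-matching rule are all assumptions for illustration, not any real agent framework's API:

```python
from urllib.parse import urlparse

# Hypothetical user-granted policy: host -> set of allowed capabilities.
POLICY = {
    "walmart.com": {"allow_cc", "allow_address"},
}

def capability_allowed(url: str, capability: str) -> bool:
    """Return True only if the user granted `capability` for the URL's host."""
    host = urlparse(url).hostname or ""
    for domain, caps in POLICY.items():
        # Match the granted domain itself or any of its subdomains.
        if host == domain or host.endswith("." + domain):
            return capability in caps
    return False  # default-deny: unlisted sites get no sensitive capabilities

print(capability_allowed("https://www.walmart.com/checkout", "allow_cc"))  # True
print(capability_allowed("https://evil.example/form", "allow_cc"))         # False
```

The important design choice is the default-deny at the end: a site the user never listed gets no access to credit-card or address data, so a prompt-injected page can't simply ask the agent to hand them over.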