lxgr 7 hours ago
What I would have expected is prompt injection or other methods to get the agent to do something its user doesn't want it to, not regular "classical" attacks. At least currently, I don't think we have good ways of preventing the former, but the latter should be possible to avoid.
ethin 7 hours ago | parent
They are easy to avoid if you actually give a damn. Unfortunately, the people who create these things don't, assuming they even know what half of these attacks are in the first place. They just want to pump something out right now, and the mindset is "we'll figure out all the problems later, I want my cake now!" Maximum velocity! Full throttle! It's just as bad as a lot of the vibe-coders I've seen. I saw a vibe-coder who created an app without even knowing what they wanted to create (as in, what it would do), and the AI they were using to vibe-code hand-wrote a PE parser to load DLLs instead of using LoadLibrary or delay loading. Which, really, is the natural consequence of giving someone access to software engineering tools when they don't know the first thing about the field. Is that gatekeeping of a sort? Maybe, but I'd rather have that than "anyone can write software, and oh by the way this app reimplements wcslen in Rust because the vibe-coder had no idea what they were doing".
chrisjj 7 hours ago | parent
> At least currently, I don't think we have good ways of preventing the former, but the latter should be possible to avoid.

Here's the thing. People who don't see a problem with the former obviously have no interest in addressing the latter.