Remix.run Logo
cobbal 8 hours ago

Wait, the only way they suggest solving the problem by rate limiting and using a better model?

Software engineers figured out these things decades ago. As a field, we already know how to do security. It's just difficult and incompatible with the careless mindset of AI products.

crazygringo 7 hours ago | parent | next [-]

> As a field, we already know how to do security.

Well, AI is part of the field now, so... no, we don't anymore.

There's nothing "careless" about AI. The fact that there's no foolproof way to distinguish instruction tokens from data tokens is not careless, it's a fundamental epistemological constraint that human communication suffers from as well.

Saying that "software engineers figured out these things decades ago" is deep hubris based on false assumptions.

NitpickLawyer 6 hours ago | parent | prev | next [-]

> As a field, we already know how to do security

Uhhh, no, we actually don't. Not when it comes to people anyway. The industry spends countless millions on trainings that more and more seem useless.

We've even had extremely competent and highly trained people fall for basic phishing (some in the recent few weeks). There was even a highly credentialed security researcher that fell for one on youtube.

simonw 5 hours ago | parent [-]

I like using Troy Hunt as an example of how even the most security conscious among us can fall for a phishing attack if we are having a bad day (he blamed jet flag fatigue): https://www.troyhunt.com/a-sneaky-phish-just-grabbed-my-mail...

rvz 7 hours ago | parent | prev [-]

> Software engineers figured out these things decades ago.

Well this is what happens when a new industry attempts to reinvent poor standards and ignores security best practices just to rush out "AI products" for the sake of it.

We have already seen how (flawed) standards like MCPs were hacked immediately from the start and the approaches developers took to "secure" them with somewhat "better prompting" which is just laughable. The worst part of all of this was almost everyone in the AI industry not questioning the security ramifications behind MCP servers having direct access to databases which is a disaster waiting to happen.

Just because you can doesn't mean you should and we are seeing how hundreds of AI products are getting breached because of this carelessness in security, even before I mentioned if the product was "vibe coded" or not.