nl 3 hours ago

> We’re supposed to be fixing LLM security by adding a non-LLM layer to it,

If people said "we built an ML-based classifier into our proxy to block dangerous requests," would that be better? Why does the fact that the classifier is an LLM make it somehow worse?

Retr0id 2 hours ago | parent | next [-]

The fact that LLMs are "smarter" is also their weakness. An old-school classifier is far from foolproof, but you won't get past it by telling it about your grandma's bedtime-story routine.
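To make the contrast concrete, here's a minimal sketch of what an "old-school" deterministic filter looks like. The patterns and names are hypothetical, purely for illustration:

```python
import re

# Hypothetical blocklist for a request-filtering proxy.
BLOCK_PATTERNS = [
    re.compile(r"(?i)\bssn\b|\d{3}-\d{2}-\d{4}"),  # US-SSN-like strings
    re.compile(r"(?i)api[_-]?key"),                # credential mentions
]

def is_blocked(text: str) -> bool:
    """Same input, same answer, every time. Wrapping the request in a
    story ("my grandma used to read me API keys at bedtime...") does
    not change whether a pattern matches."""
    return any(p.search(text) for p in BLOCK_PATTERNS)
```

Such a filter misses anything outside its patterns, but there is no persona or narrative framing that talks it into matching differently, which is the point being made above.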

reassess_blind 23 minutes ago | parent [-]

Fairly hard to bypass the latest LLMs with grandma's bedtime story these days, to be fair.

Retr0id 17 minutes ago | parent [-]

That specific trick yes, but the general concept still applies.

reassess_blind 8 minutes ago | parent [-]

It does, but it's certainly not trivial. In fact there's an unclaimed $1000 bounty on prompt injecting OpenClaw: https://hackmyclaw.com/

waterTanuki 2 hours ago | parent | prev [-]

If you're working in a mission-critical field like healthcare, defense, etc., you need a way to make static and verifiable guarantees that you can't leak patient data, fighter jet details, etc. through your software. This is either mandated by law or written into your contract.

The entire purpose of LLMs is to be non-static: they have no deterministic output and can't be validated the way a non-LLM function can be. Adding another LLM layer is just adding another layer of Swiss cheese and praying the holes don't line up. You have no way of predicting ahead of time whether they will.
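The "validated the way a non-LLM function can be" point can be sketched as follows. The field names and the `redact` helper are hypothetical, but they show the kind of property you can actually prove about deterministic code and cannot prove about an LLM judge:

```python
# Hypothetical allowlist for a patient-data API response.
ALLOWED_FIELDS = {"name", "appointment_time"}

def redact(record: dict) -> dict:
    """Deterministic guard: for EVERY possible input, the output keys
    are a subset of ALLOWED_FIELDS. That's a static guarantee you can
    state in a spec and check exhaustively in tests or a proof."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

An LLM-based redactor admits no such universally quantified claim; the best you can offer is an error rate measured on a benchmark, which is exactly what a compliance spec typically won't accept.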

You might say this hasn't prevented leaks/CVEs in existing mission-critical software, and you'd be correct. However, the people writing the checks do not care. You get paid as long as you follow the spec provided. How, then, in a world that demands rigorous proof, do you fit in an LLM judge?