btown 11 hours ago

The problem, though, is that this turns "one of our developers was hit by a supply chain attack that never hit prod, we wiped their computer and rotated keys, and it's not like we're a big target for the attacker to make much use of anything they exfiltrated..." into "now our entire source code has been exfiltrated and, even with rudimentary line-by-line scanning, will be automatically audited for privilege escalation opportunities within hours."

Taken to an extreme, the end result is a dark forest. I don't like what that means for entrepreneurship generally.

linkregister 10 hours ago | parent | next [-]

This is a great example of a vulnerability chain that can be broken by scanning with even cheaper open source models. A developer getting pwned doesn't have to lead to total catastrophe: with trivial privilege escalations closed off, an attacker has to be noisy and will set off commodity alerting. All that stands between these entrepreneurs and that outcome is the company's will to implement fixes for the 100 GitHub Dependabot alerts on their code base.
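That kind of scanning is cheap to wire up. A minimal sketch, assuming a locally hosted open-source model behind an Ollama-style `/api/generate` endpoint — the model name, endpoint, and prompt wording here are illustrative, not a recommendation:

```python
import json
import urllib.request

# Hypothetical local setup: an open-source code model served by Ollama.
# Adjust the URL and model tag to whatever you actually run.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "qwen2.5-coder:7b"

def build_scan_prompt(path: str, source: str) -> str:
    """Ask the model to flag trivial privilege-escalation opportunities."""
    return (
        f"You are a security reviewer. File: {path}\n"
        "List any privilege-escalation or injection risks, one per line,\n"
        "with the line number and a one-sentence fix. Say NONE if clean.\n\n"
        f"{source}"
    )

def scan_file(path: str, source: str) -> str:
    """Send one file to the local model and return its raw findings."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": build_scan_prompt(path, source),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Run per file in CI and fail the build on anything other than NONE; the point is that the defender's scan happens before prod ever sees the code.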

It does mean that the hoped-for 10x productivity increase from engineers using LLMs is eroded by the extra time needed for security.

This take is not theoretical. I am working on this effort currently.

pixl97 8 hours ago | parent | next [-]

I disagree that it's extra time for security, it's the time we should have been spending in the first place.

fragmede 5 hours ago | parent | prev [-]

It's great news for developers: extra spend on a development/test env so devs have no prod access and prod has no SSH access, and SREs get two laptops, the second being a Chromebook that only pulls credentials when it's absolutely necessary.
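For concreteness, a hedged sketch of what "prod has no SSH access" can look like as a first step. The directives are standard sshd_config options; the group name is hypothetical:

```
# /etc/ssh/sshd_config on prod hosts -- hardening sketch, not a full policy
PermitRootLogin no            # no direct root logins
PasswordAuthentication no     # key-based auth only
AllowGroups break-glass       # hypothetical group, kept empty in normal operation
```

Taken further, you don't run sshd on prod at all and route emergency access through an audited session broker.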

linkregister 2 hours ago | parent [-]

Yes, having a good development env with synthetic data, and an inaccessible, secure prod env, just got easier to justify. I never considered the secondary SRE laptop, but I think it might be a good idea.

eru 6 hours ago | parent | prev [-]

> Taken to an extreme, the end result is a dark forest.

Sorry, how does that work?

bryanrasmussen 2 hours ago | parent [-]

Since the suggestion is that the new security-bug-finding LLMs will increase protection because they have access to the full source code, the dark forest fear would be that an attacker who manages to get all the source will be in a better position too.

This seems wrong, however, as it ignores the arrow of time. The full source code has already been scanned, and everything LLMs can find has been fixed before hitting production; anyone exfiltrating your codebase can only use their models to find holes that are reachable via production and that your models for some reason did not find.

I don't think there is any reason to suppose non-nation-state actors will have better models available to them, so it is not a dark forest; nation states will probably limit their attacks to specific targets. Most companies that secure their codebase with LLMs built for the purpose will probably be in a significantly more secure position than today, and, I would think, the golden age of criminal hacking is drawing to a close. This assumes companies smart enough to do this, however.

Furthermore, the worry about nation-state attackers still assumes that they will have better models, and I'm not sure that is likely either.

staplers 40 minutes ago | parent [-]

  I would think, the golden age of criminal hacking is drawing to a close. This assumes companies smart enough to do this, however.
It's rarely the systems that are the weak link, rather the humans with backdoor access.