anonymous908213 7 hours ago

They can't both be true if we're talking about the premise of the article, which is the subject of the headline and expounded upon prominently in the body:

  The Industrialisation of Intrusion

  By ‘industrialisation’ I mean that the ability of an organisation to complete a task will be limited by the number of tokens they can throw at that task. In order for a task to be ‘industrialised’ in this way it needs two things:

  1. An LLM-based agent must be able to search the solution space. It must have an environment in which to operate, appropriate tools, and not require human assistance. The ability to do true ‘search’, and cover more of the solution space as more tokens are spent also requires some baseline capability from the model to process information, react to it, and make sensible decisions that move the search forward. It looks like Opus 4.5 and GPT-5.2 possess this in my experiments. It will be interesting to see how they do against a much larger space, like v8 or Firefox.
  2. The agent must have some way to verify its solution. The verifier needs to be accurate, fast and again not involve a human.
"The results are contigent upon the human" and "this does the thing without a human involved" are incompatible. Given what we've seen from incompetent humans using the tools to spam bug bounty programs with absolute garbage, it seems the premise of the article is clearly factually incorrect. They cite their own experiment as evidence for not needing human expertise, but it is likely that their expertise was in fact involved in designing the experiment[1]. They also cite OpenAI's own claims as their other piece of evidence for this theory, which is worth about as much as a scrap of toilet paper given the extremely strong economic incentives OpenAI has to exaggerate the capabilities of their software.

[1] If their experiment even demonstrates what it purports to demonstrate. For anyone to give this article any credence, the exploit needs independent verification that it is what they say it is and that it was achieved the way they say it was achieved.

adw 6 hours ago

What this is saying is "you need an objective criterion you can use as a success metric" (aka a verifiable reward in RL terms). "Design of verifiers" is a specific form of domain expertise.

This applies to exploits, but it applies _extremely_ generally.

The increased interest in TLA+, Lean, etc. comes from the same place; these are languages well suited to expressing deterministic success criteria, and it appears that, for a very wide range of problems across the whole of software, given a clear enough and verifiable enough objective you can point the money cannon at it until the problem is solved.
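
To make "deterministic success criterion" concrete, here is a minimal Python sketch of a verifier; the ./target binary and the crash-as-success convention are my own illustrative assumptions, not anything from the article:

  import subprocess

  def verifier(candidate_path: str) -> bool:
      """Verifiable reward: deterministic, fast, and no human in the loop.
      Success criterion: the candidate input crashes the target binary."""
      try:
          result = subprocess.run(["./target", candidate_path],
                                  capture_output=True, timeout=10)
      except subprocess.TimeoutExpired:
          return False  # a hang is not a verified crash
      return result.returncode < 0  # negative = killed by a signal, e.g. SIGSEGV

Everything upstream of a check like this can be as stochastic as it likes; the reward at the end is still binary and machine-checkable.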

The economic consequences of that are going to be very interesting indeed.

IanCal 6 hours ago

A few points:

1. I think you have mixed up assistance and expertise. They talk about not needing a human in the loop for verification and for continuing the search, not about the initial setup. Those are quite different. One well-specified task can be attempted many times, and the skill sets overlap but are not identical.

2. The article is about where they may get to rather than just what they are capable of now.

3. There’s no conflict between the idea that, out of 10 parallel agents running top models, one will usually succeed in exploiting a vulnerability (gated on an actual test that the exploit works, with feedback and iteration) and the observation that random models pointed at arbitrary code, without a good spec, without the ability to run code, and run only once, will generate lower-quality results.
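
A rough Python sketch of that gating structure, where run_agent and exploit_works are hypothetical stand-ins for one model attempt and the actual test:

  from concurrent.futures import ThreadPoolExecutor, as_completed

  def first_verified(run_agent, exploit_works, n_agents: int = 10):
      """Launch n_agents attempts in parallel and accept the first candidate
      that passes a real test; one success out of ten is enough."""
      with ThreadPoolExecutor(max_workers=n_agents) as pool:
          # each attempt gets its own seed so the agents diverge
          futures = [pool.submit(run_agent, seed) for seed in range(n_agents)]
          for future in as_completed(futures):
              candidate = future.result()
              if exploit_works(candidate):  # the gate: an actual working exploit
                  return candidate
      return None  # every attempt failed the gate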

simonw 7 hours ago

My expectation is that any organization attempting this will need subject matter experts to both set up and run the swarm of exploit-finding agents for them.

GaggiX 7 hours ago

After setting up the environment and the verifier, you can spawn as many agents as you want until the conditions are met. This is only possible because they run without human assistance; that's the "industrialisation".
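
In loop form, this is roughly the whole structure (a sketch; spawn_agent, verifier, and the token budget are hypothetical stand-ins, not the article's code):

  def industrialised_search(spawn_agent, verifier, token_budget: int):
      """Setup is paid once, up front; after that, attempts are just tokens.
      Keep spawning agents until the conditions are met or the budget runs out.
      No human appears anywhere inside this loop."""
      spent = 0
      while spent < token_budget:
          candidate, tokens_used = spawn_agent()  # one unattended attempt
          spent += tokens_used
          if verifier(candidate):  # conditions met: a verified solution
              return candidate
      return None  # capability was limited only by tokens thrown at the task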