j2kun 11 hours ago

The article heavily quotes the "AI Security Institute" as a third-party analysis. It was the first I'd heard of them, so I looked up their about page, and it appears to be staffed primarily by people from the AI industry (former DeepMind/OpenAI staff, etc.), with no one from the security industry mentioned. So while the security landscape is clearly evolving (cf. also Big Sleep and Project Zero), the conclusion of "to harden a system we need to spend more tokens" sounds like yet more AI boosting from a different angle. It raises the question of why no alternatives (like formal verification) are mentioned in the article or the AISI report.

I wouldn't be surprised if NVIDIA picked up this talking point to sell more GPUs.

croemer 44 minutes ago | parent | next [-]

They are a UK government unit: "The AI Security Institute is a research organisation within the Department of Science, Innovation and Technology."

Unfortunately, they fit straight lines to graphs whose y-axis runs from 0 to 100% and whose x-axis is time - which is not great. They should fit a logistic curve instead.
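To make the objection concrete, here is a minimal sketch of why a straight-line fit misbehaves when the y-axis is a bounded percentage, while a logistic curve saturates. The numbers are made up for illustration (they are not data from the AISI report), and the logistic is fit by linearizing with a logit transform so plain NumPy least squares suffices:

```python
import numpy as np

# Hypothetical data: percentage of tasks solved over time, saturating near 100%.
t = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)      # time (e.g. years)
y = np.array([5, 12, 30, 55, 75, 88, 94], dtype=float)  # percent solved

# Linearize: logit(y/100) = k * (t - t0) is linear in t, so ordinary
# least squares recovers the logistic parameters k and t0.
z = np.log((y / 100) / (1 - y / 100))
k, c = np.polyfit(t, z, 1)
t0 = -c / k

def logistic(t):
    """Logistic fit: approaches 100% asymptotically, never exceeds it."""
    return 100 / (1 + np.exp(-k * (t - t0)))

# A straight-line fit, by contrast, crosses 100% and keeps going.
slope, b = np.polyfit(t, y, 1)
print(f"logistic at t=20: {logistic(20.0):.1f}%  linear at t=20: {slope * 20 + b:.1f}%")
```

Extrapolating far out, the logistic stays pinned just under 100% while the linear fit predicts an impossible several hundred percent - which is exactly why linear extrapolation on this kind of graph is misleading.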

tptacek 11 hours ago | parent | prev | next [-]

I would be interested in which notable security researchers you can find to take the other side of this argument. I don't know anything about the "AI Security Institute", but they're saying something broadly mirrored by security researchers. From what I can see, the "debate" in the actual practitioner community is whether frontier models are merely as big a deal as fuzzing was, or something significantly bigger. Fuzzing was a profound shift in vulnerability research.

(Fan of your writing, btw.)

j2kun 9 hours ago | parent | next [-]

It's less that I think they would take the other side of the argument than that I'd want them to lend some credence to the content of the analysis. For example, I would not particularly trust a bunch of AI researchers to come up with a representative set of CTF tasks, which seems to be the basis of this analysis.

tptacek 8 hours ago | parent [-]

Yeah, you might be right about this particular analysis! The sense I have from talking to people at the labs is that they're really just picking deliberately diverse and high-profile targets to see what the models are capable of.

VorpalWay 10 hours ago | parent | prev [-]

> but they're saying something broadly mirrored by security researchers.

You might well be right; it is not an area I know much about or work in. But I'm a fan of reliable sources for claims. It is far too easy to make general statements on the internet that appear authoritative.
