gpi 10 hours ago

The amendment below, from the Anthropic blog post, is telling.

Edited November 14 2025:

Added an additional hyperlink to the full report in the initial section

Corrected an error about the speed of the attack: not "thousands of requests per second" but "thousands of requests, often multiple per second"

wging 6 hours ago | parent | next [-]

> The operational tempo achieved proves the use of an autonomous model rather than interactive assistance. Peak activity included thousands of requests, representing sustained request rates of multiple operations per second.

The assumption that no human could ever (program a computer to) do multiple things per second, or have their code do different things depending on the result of the previous request, is... interesting.

(observation is not original to me, it was someone on Twitter who pointed it out)
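
To make the point concrete: a dozen lines of plain Python, with no model involved anywhere, will happily sustain multiple requests per second and branch on each response. Everything below (the target URL, the paths, the branch condition) is made up for illustration:

    import concurrent.futures
    import urllib.request

    TARGET = "http://example.com"  # placeholder, not from the report

    def probe(path):
        # One request; branch on its result, like any scripted scanner.
        try:
            with urllib.request.urlopen(f"{TARGET}/{path}", timeout=5) as r:
                if r.status == 200:
                    # React to the previous response with a follow-up request.
                    return urllib.request.urlopen(
                        f"{TARGET}/{path}/next", timeout=5
                    ).status
        except OSError:
            return None

    # A small thread pool trivially sustains multiple operations per second.
    paths = [f"probe-{i}" for i in range(1000)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(probe, paths))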

sublimefire 4 hours ago | parent [-]

Great point, it might be just pure ignorance. Even OSS pentesting tooling such as Metasploit has great capabilities. I can see how an LLM could be leveraged to build custom modules on top of those tools, or how you could add basic LLM “decision” making, but this is just another additive tool in the chain.
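
As a rough sketch of what that “decision making” layer might look like: ask_llm below is a hypothetical stand-in for any model client, and the nmap/nikto invocations and the 192.0.2.10 address are purely illustrative. The model only picks the next tool invocation from a fixed menu; the existing tools do all the actual work.

    import subprocess

    def ask_llm(prompt: str) -> str:
        # Stand-in for any model API call; returns a canned answer so the
        # sketch runs as-is. Wire up a real client here.
        return "web"

    # Run an existing OSS tool and capture its output. 192.0.2.10 is a
    # reserved documentation address, not a real target.
    scan = subprocess.run(
        ["nmap", "-sV", "--top-ports", "100", "192.0.2.10"],
        capture_output=True, text=True,
    ).stdout

    # Let the model choose the next step from a fixed menu of invocations.
    choice = ask_llm(
        "Given this scan output, answer with exactly one word, "
        "'web' or 'smb':\n" + scan
    )
    if choice.strip() == "web":
        next_cmd = ["nikto", "-h", "192.0.2.10"]
    else:
        next_cmd = ["nmap", "--script", "smb-enum-shares", "192.0.2.10"]
    subprocess.run(next_cmd)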

AstroBen 8 hours ago | parent | prev [-]

There is absolutely no way a technical person would mix those up

wonnage an hour ago | parent | next [-]

But what about an ML person roped into writing an AI-assisted blog post about security?

edanm 4 hours ago | parent | prev [-]

Right! It's well known that technical people never make mistakes.

SiempreViernes 3 hours ago | parent | next [-]

I think the expectation is more that serious people have their work checked over by other serious people to catch the obvious mistakes.
