pixl97 7 hours ago

This is the weirdest take I've seen.

It takes humans a very long time to learn how to code/find bugs. You just can't take any human and have them do it in a reasonable amount of time with a reasonable amount of money.

Claude is effectively automation: once you have the hardware, you can run as many copies of the model as you want. Factories can build hardware far faster than we can train more people.

It's weird to see a denial of the industrial revolution on HN.

alex_young 7 hours ago | parent | next [-]

A bit uncharitable no?

I’m not denying that LLMs can be used to improve security research, or suggesting that their use is wrong, or anything like that.

Humans have used software to research security for a long time. AI-driven SAST is clearly going to help improve productivity.

pixl97 6 hours ago | parent [-]

Quantity has a quality all its own.

Humans have been burning things for a very long time; it was only when we started burning coal industrially, en masse, that the global environmental impacts stacked up to the point of considerable damage.

tracker1 3 hours ago | parent | prev [-]

You still need people in the mix who understand the scope, scale, and impact of the exploits/bugs found. Just letting agents go wild is how you get slop over time... You can probably get away without those people to an extent, but I'd suggest you're likely to increase the risk of errors and misbehavior in practice over time by not checking agent work.

Even checking human work is often a shortcoming of processes in practice.