brutus1213 · 5 hours ago:
Apart from the actual exploit, it is intriguing to see how a security researcher can leverage an AI tool to gain an asymmetric advantage over the actual developers of the code. Devs are pretty focused on their own subsystem, and it would take serendipity or a ton of experience to spot such patterns. Thinking about this more: given all the AI-generated code being put into production these days (I routinely see posts from Anthropic and others boasting about how much code is being written by AI), I can see it becoming much, much harder to review all the code being written by AIs. It makes a lot of sense to use an AI system to find the vulnerabilities that humans don't have time to catch.
bmit · 4 hours ago:
Looking at their website, depthfirst seems to offer a product that essentially solves this problem.
mortsnort · 4 hours ago:
By your logic, it would be really easy for the code creator to run an agent to find and fix exploits in their own code.