amelius 7 hours ago:
> These tools feel symmetric for defenders to use as well.

Why? The attackers can run the defending software as well. That means they can throw millions of test cases at it offline, and if even one breaks through the defenses, they can take that one live.
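A toy sketch of that loop (everything here is invented for illustration: is_blocked() stands in for the defender's filter, and the generator enumerates a handful of URL-encoded path traversals rather than millions of mutations):

    # Run the defender's own check offline; keep only payloads that
    # still decode to an attack but slip past the filter.
    import itertools
    import urllib.parse

    def is_blocked(path: str) -> bool:
        # Hypothetical stand-in for the defending software's naive blocklist.
        return "../" in path

    def decodes_to_traversal(path: str) -> bool:
        # Does the payload still work once the server URL-decodes it?
        return "../" in urllib.parse.unquote(path)

    candidates = (
        "/" + dots + sep
        for dots, sep in itertools.product(
            ["..", "%2e%2e", "%2E%2E"], ["/", "%2f", "%2F"]
        )
    )

    survivors = [p for p in candidates
                 if decodes_to_traversal(p) and not is_blocked(p)]
    print(survivors)  # only these would ever "go live"

The asymmetry being described is exactly this: the attacker only needs the survivors list to be non-empty.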
er4hn 4 hours ago:
Right, that's the same situation as fuzz testing today, which is why I compared it. I think you're gesturing at "attackers only need to get lucky once, defenders need to do a good job every time", but in practice, when you apply techniques like fuzz testing, it often doesn't take much effort to get good coverage. I suspect a similar situation will play out with LLM-assisted attack generation. For higher-value targets built on OSS, there are projects like Google Big Sleep to bring enhanced resources to bear.
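For a sense of how little effort a basic harness takes: a minimal coverage-guided fuzzer in Python with Google's atheris looks roughly like this (json.loads is just a placeholder target):

    import sys

    import atheris

    # Instrument the module under test so coverage feedback guides mutation.
    with atheris.instrument_imports():
        import json

    def TestOneInput(data: bytes) -> None:
        # Called by the fuzzer once per generated input.
        try:
            json.loads(data)
        except ValueError:
            # Expected for malformed input (JSONDecodeError and bad
            # UTF-8 both subclass ValueError); anything else is a bug.
            pass

    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()

Point the same harness at your own parser and the coverage feedback does most of the work of finding interesting inputs.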
execveat 6 hours ago:
Defenders have threat modeling on their side. With access to source code, design docs, configs, infra, and the actual requirements, plus the ability to redesign or to choose the architecture and dependencies for the job, there is a lot that gives the defending side an advantage. I'm quite optimistic that AI will ultimately make systems more secure and better protected, shifting the overall balance toward the defenders.
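One concrete, hypothetical example of that advantage: because defenders choose the dependencies, they can gate them in CI against an allowlist reviewed during threat modeling; attackers never get a say in that. The file name, allowlist, and pinned-requirements assumption below are all made up for the sketch:

    # Hypothetical CI gate: fail the build if requirements.txt pulls in
    # anything outside the set reviewed during threat modeling.
    # Assumes pinned "name==version" lines.
    import sys

    ALLOWED = {"requests", "cryptography", "flask"}  # example reviewed set

    def check(requirements_path: str) -> int:
        unreviewed = []
        with open(requirements_path) as f:
            for line in f:
                name = line.split("==")[0].strip().lower()
                if name and not name.startswith("#") and name not in ALLOWED:
                    unreviewed.append(name)
        if unreviewed:
            print("unreviewed dependencies:", ", ".join(unreviewed))
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(check("requirements.txt"))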