▲ Dumblydorr 12 hours ago
What would AGI actually mean for security? Does it heavily favor attackers or defenders? Even current LLMs may not help much in defense, but they could teach attackers a lot, right? And what if employees feed an LLM information during everyday use that attackers could then extract and study?
▲ ACCount37 11 hours ago
AGI favors attackers initially. While it can be used defensively (preemptively scanning for vulns, hardening exposed software more cheaply, and monitoring networks for intrusion around the clock; a toy version of that monitoring piece is sketched below), how many companies are going to start doing that fast enough to counter cutting-edge AGI-enabled attackers probing every piece of their infra for vulns at scale?

It's like a very big stack of zero-days leaking to the public all at once. Sure, they'll all get fixed eventually, and everyone will update, eventually. But until that happens, the usual suspects are going to have a field day.

It may come to favor defense in the long term. But it's AGI. If that tech lands, the "long term" may not exist.
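To make "monitor at all times" concrete, here's a toy Python sketch: count failed SSH logins per source IP from an auth log and flag bursts. The log path, regex, and threshold are assumptions you'd tune for your own environment; an AGI-grade monitor would look for far subtler patterns, but the plumbing is the same:

    # Toy intrusion monitoring: flag source IPs with bursts of failed
    # SSH logins. Log path and threshold are assumptions.
    import re
    from collections import Counter

    FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
    THRESHOLD = 20  # failed attempts before alerting

    counts = Counter()
    with open("/var/log/auth.log", errors="replace") as log:
        for line in log:
            m = FAILED.search(line)
            if m:
                counts[m.group(1)] += 1

    for ip, n in counts.most_common():
        if n >= THRESHOLD:
            print(f"possible brute force: {ip} ({n} failed logins)")

The plumbing is cheap; the expensive part is the judgment layered on top, which is exactly what AGI would commoditize for both sides.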
| ||||||||||||||||||||||||||||||||
▲ HarHarVeryFunny 12 hours ago
At the end of the day, AI at any level of capability is just automation: the machine doing something instead of a person. Arguably this may change in the far distant future if we ever build something of significantly greater intelligence, or just capability, than a human, but today's AI is struggling to draw clock faces, so it's not quite there yet...

The thing about automation is that it scales, which I would say favors the attacker, at least at this stage of the arms race: they can launch thousands of hacking/vulnerability attacks against thousands of targets, looking for that one chink in the armor.

I suppose the defenders could do the exact same thing, though: use this kind of automation to find their own vulnerabilities before the bad guys do (see the sketch below). Not every corporation, and probably extremely few, would have the skills to do this, so one could imagine some government group (part of DHS?) set up to probe the security of US companies, perhaps requiring opt-in from the companies.
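To be concrete about "the exact same thing": a minimal stdlib-only Python sketch that concurrently probes hosts you own for unexpectedly open ports. The addresses and port list are placeholders; the point is that 10 hosts or 10,000 is the same code:

    # Defensive port sweep over your own assets (placeholders below).
    import asyncio

    HOSTS = ["192.0.2.10", "192.0.2.11"]   # TEST-NET placeholder addresses
    PORTS = [22, 80, 443, 3389, 5432]      # commonly exposed services

    async def check_port(host, port, timeout=2.0):
        # True if a TCP connection to host:port succeeds within the timeout.
        try:
            _, writer = await asyncio.wait_for(
                asyncio.open_connection(host, port), timeout)
            writer.close()
            await writer.wait_closed()
            return True
        except (OSError, asyncio.TimeoutError):
            return False

    async def main():
        # Fan out every (host, port) probe concurrently: the scaling part.
        tasks = {(h, p): asyncio.create_task(check_port(h, p))
                 for h in HOSTS for p in PORTS}
        for (host, port), task in tasks.items():
            if await task:
                print(f"open: {host}:{port}")

    asyncio.run(main())

Swap the probe for a fuzzer or a vuln scanner and the loop doesn't change; that's what makes the scaling argument cut both ways.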
| ||||||||||||||||||||||||||||||||
▲ CuriouslyC 12 hours ago
IMO AI favors attackers more than defenders: it's cost-prohibitive for defenders to scan the code of every version of every piece of software they routinely use for exploits, but not for attackers (a sketch of what that scanning looks like is below). Also, social exploits are time-consuming and AI is quite good at automating them, and those take place outside your security perimeter, so you'll have no way of knowing.
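A minimal sketch of what that scanning might look like, assuming the OpenAI Python SDK with an API key in the environment; the model name, prompt, and vendor/ path are all placeholders:

    # Walk vendored dependencies and ask an LLM to flag risky code.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = ("You are a security reviewer. List any memory-safety, "
              "injection, or auth issues in this code, with line "
              "references, or say 'none'.")

    def scan_file(path):
        code = path.read_text(errors="replace")[:20_000]  # crude context cap
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "system", "content": PROMPT},
                      {"role": "user", "content": code}],
        )
        return resp.choices[0].message.content

    for f in Path("vendor/").rglob("*.c"):  # placeholder source tree
        print(f"--- {f}\n{scan_file(f)}")

Note where the cost asymmetry bites: the defender pays per token for every file of every version, while the attacker can stop paying the moment one file comes back exploitable.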
▲ 11 hours ago
[deleted]
▲ intended 11 hours ago
There's a report co-authored by Bruce Schneier estimating that GenAI tools have significantly increased the profitability of phishing [1]. They produce emails with higher click-through rates and reduce the cost of delivering them. Groups that were previously too unprofitable to target are now worth attacking (the arithmetic below shows why).
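The economics here are just expected value. These numbers are made up for illustration (not taken from the report), but they show how a lower per-email cost plus a higher click-through rate flips a target group from unprofitable to profitable:

    # Toy phishing economics: profit = emails * (CTR * value - unit cost).
    def expected_profit(n_emails, cost_per_email, click_rate, value_per_click):
        return n_emails * (click_rate * value_per_click - cost_per_email)

    # Hand-written phishing: costly to produce, mediocre click-through.
    print(expected_profit(1_000, cost_per_email=2.00, click_rate=0.005,
                          value_per_click=300))   # -500.0: not worth it

    # GenAI phishing: near-zero marginal cost, better-tailored emails.
    print(expected_profit(1_000, cost_per_email=0.05, click_rate=0.02,
                          value_per_click=300))   # 5950.0: now profitable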