kube-system | 4 days ago
People accidentally doing harm can cause significant problems too -- that's why many professions require licensing, and why we don't let random people practice medicine even if they have good intentions.

The problem here is that most security testing doesn't fit the Hollywood narrative of "some people running nmap and finding critical vulnerabilities that take down the power grid". Many of the real-world vulnerabilities in large-scale systems sit at the interface between technology and humans, and those are the ones computer science often can't reasonably fix: social engineering, trust systems, physical-layer exploits, etc.

In securing any large system, there will be many low-impact issues that do exist but aren't important (or even desirable) to fix, because the cost of fixing them is too high and the likelihood of exploitation is low -- they're impractical as attack vectors. But legalizing the exploitation of these edge cases would guarantee you'd see problems, because you'd be creating a financial opportunity where there previously was none. For example: we don't need to incentivize a wave of thousands of script kiddies fiddling with their power meters, trying to social-engineer support staff, running DoS scripts against the public website, etc. Those things don't improve critical infrastructure; they just cause a nuisance and make things difficult for people.
Buttons840 | 4 days ago | parent
DDoS is not valid security research; it's just destruction.

Also, we need to clarify the scenario, because you said:

> the likelihood of exploit is low

but you also mention the need to stop people "accidentally" exploiting the system -- so which is it? A system that can be accidentally broken by bumbling idiots does not deserve protection, IMO.