Buttons840 5 days ago

Sure, we should deter people from purposely doing harmful things, but they should be given the benefit of the doubt unless it can be proven they were intentionally doing harm beyond just testing security.

One thing that is not a good option is the status quo we're discussing here, in which a "bumbling idiot" can take down a city power grid. If that's how things are, then we shouldn't cower and hope we remain safe from every idiot out there; we need to shake things up and find the problems now. Hopefully without actually taking out any power grid.

kube-system 4 days ago | parent [-]

People accidentally doing harm can cause significant problems too -- that's why many professions require licensing and we don't let random people practice medicine, even if they have good intentions.

The problem here is that most security testing is not the Hollywood narrative of "some people running nmap and finding critical vulnerabilities that take down the power grid". Many of the real-world security vulnerabilities in large-scale systems live at the interface between technology and humans, and those are the ones computer science often can't reasonably fix: social engineering, trust systems, physical-layer exploits, etc.

In securing any large system, there are going to be many low-impact issues that do exist but aren't necessarily important (or even desirable) to fix, because the cost of fixing them is too high and the likelihood of exploit is low, since they're impractical as attack vectors. But legalizing the exploit of these edge cases would guarantee you'd see issues, because you're creating a financial opportunity where there previously was none.

For example: we don't need to incentivize a wave of thousands of script kiddies fiddling with their power meters, trying to social engineer support staff, running DoS scripts against the public website, etc. Those things aren't helpful in improving critical infrastructure; they're just going to cause a nuisance and make things difficult for people.
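
To put rough numbers on that tradeoff, here's a toy Python sketch of the standard annualized-loss-expectancy arithmetic defenders use (all figures made up, my framing): a hole is only worth fixing when the expected yearly loss it prevents exceeds the yearly cost of fixing it. Legalizing and monetizing an exploit raises its expected rate of occurrence, which can flip that calculation.

    def annualized_loss_expectancy(single_loss, annual_rate):
        # ALE = single loss expectancy (SLE) * annualized rate of occurrence (ARO)
        return single_loss * annual_rate

    def worth_fixing(single_loss, annual_rate, yearly_fix_cost):
        return annualized_loss_expectancy(single_loss, annual_rate) > yearly_fix_cost

    # A low-impact edge case with an impractical attack vector:
    # rational to leave unfixed.
    print(worth_fixing(single_loss=500.0, annual_rate=0.01,
                       yearly_fix_cost=20_000.0))   # False

    # Legalize (and monetize) the exploit and the rate of occurrence
    # jumps, flipping the same calculation.
    print(worth_fixing(single_loss=500.0, annual_rate=200.0,
                       yearly_fix_cost=20_000.0))   # True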

Buttons840 4 days ago | parent [-]

DDoS is not valid security research, it's just destruction.

Also, we need to clarify the scenario because you said:

> the likelihood of exploit is low

but you also mention the need to stop people "accidentally" exploiting the system, so which is it?

A system that can be accidentally broken by bumbling idiots does not deserve protection IMO.

kube-system 4 days ago | parent | prev [-]

> DDoS is not valid security research, it's just destruction.

I didn't say anything about DDoS in my comment. DoS is a broader term referring to any loss of availability, and availability is one of the three fundamental parts of the CIA triad (confidentiality, integrity, availability), so yes, it is absolutely something security researchers evaluate.

> Also, we need to clarify the scenario because you said:

> the likelihood of exploit is low

> but you also mention the need to stop people "accidentally" exploiting the system, so which is it?

I said "accidentally doing harm". For a real world exploit to happen, you have to have a couple of different things align. First, you need a vulnerability. Second, you need some way that somebody could exploit that vulnerability. Third, you need a reason that somebody's going to do it. A vulnerability simply existing isn't enough to make it a problem.

Now, in an academic lab environment, most people don't really care about the likelihood of exploit or the motivations of an attacker, because the point is the academic computer science.

But the people who secure systems in the real world have to care about the likelihood of exploitation and the motivations of their attackers, because it's not possible to secure everything in a production environment, where you also have to ensure the availability and usability of the system for your stakeholders. You always have to make a compromise between security and usability.

So, in the real world: the locality of the attacker, the legal environment, and the impact of the exploit all play very significant roles in how someone weighs the significance of an exploit.

To make up a contrived example:

Let's say that all I have to do to cancel electricity service is create an online account using the information from a power bill and press the cancel button. There's an obvious exploit here: I could dig through my neighbor's trash, get a copy of their bill, create an account, and shut off their power.

Do we want to legalize this activity? No, I don't think so. Are we at risk of a nation state exploiting this? Probably not, because they don't have access to everyone's trash everywhere, and you couldn't really do this at scale without it being obviously unintended use. Should we require more authentication just to say we've plugged the hole? Also probably not: electricity service has to be accessible to people, and we can't require onerous authentication when many of the customers may be elderly, disabled, etc. Instead, we as a society solve this problem by making the activity a crime. And this works just fine, because anyone who has physical access is already in that legal jurisdiction as well.
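
In code, the weak flow might look something like this (a contrived Python sketch; every name here is hypothetical):

    # The only "authentication" is knowledge of details printed on a
    # paper bill -- exactly what someone digging through the trash has.
    BILLS = {"ACCT-1234": {"name": "A. Neighbor", "service_address": "12 Elm St"}}

    def verify_from_bill(account_number, service_address):
        bill = BILLS.get(account_number)
        return bill is not None and bill["service_address"] == service_address

    def cancel_service(account_number, service_address):
        if verify_from_bill(account_number, service_address):
            return "Service cancelled for " + account_number
        return "Verification failed"

    # The exploit: data copied straight off a discarded bill.
    print(cancel_service("ACCT-1234", "12 Elm St"))

Requiring stronger proof of identity would close the hole, but at the usability cost described above, which is why the legal deterrent is the control we actually use.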

I'm sure you can imagine dozens of other similar scenarios. The point is that information security is a lot more complicated than just adding authentication to a webpage. Information security isn't a technology problem; it's a people-using-technology problem.

I don't think we want to legalize activity like the one in my scenario above. That's the kind of situation where people might accidentally cause harm that they wouldn't cause today, because right now the threat of jail deters them. But if you legalize it, people are going to do it in an attempt to monetize it.