nearbuy 19 hours ago
Your plan is to release the secret recipe that anyone can use to make a WMD in a few days to absolutely everyone, and hope someone comes up with a countermeasure before some nutcase or terrorist decides to try out the new WMD? The odds of us inventing and deploying countermeasures to a new bomb or chemical weapon or biological agent in a few days are minuscule. You're gambling with terrible odds to uphold a principle in a hypothetical scenario where it's totally impractical. What happened to responsible disclosure, where you fix the vulnerability before disclosing it to the public?
AnthonyMouse 19 hours ago | parent
> What happened to responsible disclosure, where you fix the vulnerability before disclosing it to the public?

The premise of censorship is that you're trying to prevent someone from telling other people something. If the only person who knows how to do it is some scientist who is now going to try to come up with a countermeasure before announcing it, there is no need for a law prohibiting them from doing something they've already chosen not to do. And even then it's not clear that this is the right thing to do, because what if their efforts alone aren't enough to come up with a countermeasure before someone bad rediscovers it? If they decide they need help, should the law prohibit them from telling anyone?

Which brings us back to AI. If the scientist now goes to the AI for help, should it refuse because the request is about a biological weapon? What happens if that refusal delays the development of a countermeasure until it's too late? Meanwhile, if someone else asks the AI about it, the answer is only going to be in the training data if it's already public or can be deduced from public information, and when that's the case you're already in a race against the clock and you need everyone in on finding a solution. This is why we don't try to censor vulnerabilities that are already out there.

> You're gambling with terrible odds to uphold a principle in a hypothetical scenario where it's totally impractical.

There are some principles that should always be upheld because the exceptions are so rare, so ridiculous, or so purely hypothetical that it's better to eat them than to allow exceptions at all. The answer has to be "yes, we're going to do it then too," or people get into the business of actually building the censorship apparatus, and then everybody wants to use it for everything, when it shouldn't exist to begin with.