▲ lan321 | 5 hours ago | parent | next [-]

TBH if someone discovers how to easily make garage WMDs we're fucked either way. That shit will leak and it will go into mass production by states and individuals. Especially in countries with tight gun control, (organized) crime will get a massive overnight buff.
▲ nearbuy | an hour ago | parent [-]

Likely it'll leak or be rediscovered eventually. But not every trade secret gets leaked. Most responsibly disclosed software vulnerabilities aren't exploited (to our knowledge) before a fix is released. If the discovery isn't obvious, you have decent odds of keeping it secret for a while.

My point was just that nukes are a bad example of information that needs to be restricted to prevent harm.
▲ AnthonyMouse | 20 hours ago | parent | prev | next [-]

> It would be a very different story if someone discovered a new type of WMD that anyone could make in a few days from commonly available materials, if only they knew the secret recipe.

That would need to be public even more urgently. Suppose it was easy to make a biological weapon. You wouldn't be able to effectively censor it anyway, and trying to would leave you sitting on an apocalypse bomb waiting to leak to someone nefarious or be independently rediscovered before anyone else is allowed to discuss it. What you need is for knowledge of how it works to be public, so that everyone can join the effort to quickly devise countermeasures before some nutcase destroys the world.

Moreover, if something is already public enough to be in the AI training data, then it's already public.
▲ nearbuy | 19 hours ago | parent [-]

Your plan is to release the secret recipe that anyone can use to make a WMD in a few days to absolutely everyone, and hope someone comes up with a countermeasure before some nutcase or terrorist decides to try out the new WMD? The odds of us inventing and deploying countermeasures to a new bomb or chemical weapon or biological agent in a few days are minuscule. You're gambling with terrible odds to uphold a principle in a hypothetical scenario where it's totally impractical.

What happened to responsible disclosure, where you fix the vulnerability before disclosing it to the public?
▲ AnthonyMouse | 19 hours ago | parent [-]

> What happened to responsible disclosure, where you fix the vulnerability before disclosing it to the public?

The premise of censorship is that you're trying to prevent someone from telling other people something. If the only person who knows how to do it is some scientist who is now going to try to come up with a countermeasure before announcing it, there is no need for a law prohibiting them from doing something they've already chosen not to do. And even then it's not clear that this is the right thing to do, because what if their efforts alone aren't enough to come up with a countermeasure before someone bad rediscovers it? If they decide they need help, should the law prohibit them from telling anyone?

Which brings us back to AI. If the scientist now goes to the AI for help, should it refuse because it's about a biological weapon? What happens if that delays the development of a countermeasure until it's too late? Meanwhile, if someone else asks the AI about it, it's only going to be in the training data if it's already public or can be deduced from public information, and when that's the case you're already in a race against the clock and you need everyone in on finding a solution. This is why we don't try to censor vulnerabilities that are already out there.

> You're gambling with terrible odds to uphold a principle in a hypothetical scenario where it's totally impractical.

There are some principles that should always be upheld, because the exceptions are so rare or ridiculous or purely hypothetical that it's better to eat them than to let exceptions exist at all. The answer has to be "yes, we're going to do it then too", or people get into the business of actually building the censorship apparatus, and then everybody wants to use it for everything, when it shouldn't exist to begin with.
▲ nearbuy | 14 hours ago | parent [-]

> The premise of censorship is that you're trying to prevent someone from telling other people something...

So you're not against individuals self-censoring for public safety, but you're against companies censoring their AIs for public safety. Are you only against AIs censoring information that's already publicly available, or are you also against AIs censoring themselves when they know dangerous non-public information? Say the AI were the only thing to know the secret recipe for this WMD. Would this be like the scientist choosing not to tell everyone, or should the AI be designed to tell anyone who asks how to make a WMD?

> There are some principles that should always be upheld because the exceptions are so rare or ridiculous or purely hypothetical...

We're using hypotheticals to clarify the view you're trying to express, not because we think they will happen. And it seems you're expressing a view that prohibiting AI censorship should be an absolute rule, even in the hypothetical case where not censoring AI has a 95% chance of wiping out humanity.

This argument seems confused, because you're trying to assert both that prohibiting censorship is okay because these dangerous scenarios will never happen, and that censorship should still be prohibited if such a scenario did happen. If you truly believe the latter, the first assertion is not actually a factor, since you're against censorship even if a dangerous scenario like the one above did happen. And if you truly believe the former, you should be able to say you're against censorship in what you consider to be plausible scenarios, but would be in favor if, hypothetically, there were a great enough danger. Then the discussion would be about whether there are realistic scenarios where a lack of censorship is dangerous.
▲ AnthonyMouse | 12 hours ago | parent [-]

> Are you only against AIs censoring information that's already publicly available, or are you against AIs censoring themselves when they know dangerous non-public information? Say the AI was the only thing to know the secret recipe for this WMD. Would this be like the scientist choosing not to tell everyone, or should the AI be designed to tell anyone who asks how to make a WMD?

This is kind of what I mean by ridiculous hypotheticals. So you have this un-counterable yet trivial-to-produce WMD -- something that has never existed in all recorded history -- and an AI is the only thing that has it. This is a movie plot.

Even then, are you sure the answer should be "never tell anyone"? This is a computer running code to process data. It has no means to know who you are or what your intentions are. You could be the scientist who needs the formula to devise an antidote because the thing has already been released. "A computer can never be held accountable, therefore a computer must never make a management decision." It's not the machine's job to choose for you. It's frequently in error and it's not supposed to be in charge.

> This argument seems confused, because you're trying to assert that prohibiting censorship is okay because these dangerous scenarios will never happen, but also that censorship should still be prohibited if such a scenario did happen.

The problem comes from stipulating that something with a negligible probability has a high probability. Suppose I say we should make mass transit free; no fares for anyone. You bring me the hypothetical that Hitler is on his way to acquire plutonium and he doesn't have bus fare, so the only thing preventing him from getting there is the bus driver turning him away for having nothing in his pockets. Then you ask if I still think we shouldn't charge fares to anyone.

And the answer is still yes, because you still have to make the decision ahead of time, when the plausibility of that is still negligible. It's theoretically possible that any given choice could result in Armageddon via the butterfly effect. If you stipulate that that's what happens, then obviously that's not what anybody wants, but it's also a thing that only happens in the implausible hypothetical. And if you're in a hypothetical, then you can also hypothesize your way out of it. What if it's a sting and the Allies are waiting for him at the plutonium factory, and he needs to get on the bus or you're depriving them of their only chance to kill Hitler? Unless you stipulate that the tragedy is unavoidable given the decision, which is just assuming the conclusion.
▲ nearbuy | 11 hours ago | parent [-]

> The problem comes from stipulating that something with a negligible probability has a high probability.

We are not doing so, and I don't know how I could have been more clear that we are not saying this hypothetical will happen. Would it help if the hypothetical was that the AI knows a magic spell that blows up the Earth?

It's a simple question. Would you think AI censorship is acceptable if the information actually were dangerous? Don't tell me why the hypothetical is impossible, because that's entirely missing the point.

I don't know what your position is, and so I don't know what you're arguing for. I don't know if you consider freedom of information to be a terminal virtue, or if you think it's good only when the consequences are good. Telling me the hypothetical won't happen doesn't clarify anything; I already know that.

You can have the view that we only want freedom of information when it causes net good, and that it always causes net good. Or maybe you have the view that freedom of information is always virtuous and we shouldn't consider the consequences. Or maybe something else. Until you clarify your view, I don't know if/what we disagree about.
▲ AnthonyMouse | 10 hours ago | parent [-]

Hypotheticals like that are uninteresting because there are only two ways it can go. The first is that you can find a way out of it, and then you say: do we need the magic spell for anything? Is knowing about it useful to preventing it from being used? Then people need to know.

The second is that you're stipulating that the information being available is going to destroy the world with high probability and no possible means of mitigating it. Then anything else gets drowned out by the end of the world, but only because you're stipulating the outcome. Which you can't do in real life, not just because the real probability of the hypothetical is so low, but because there isn't anyone who can be trusted not to fudge the numbers when they want to censor something.

Should it be censored if there is an absolute certainty it will destroy the world? There isn't much room to move in that one. Should it be censored because somebody claims it's really bad? Nope, because it's way more likely that they're full of crap than that it's actually going to destroy the world.
▲ Y_Y | 21 hours ago | parent | prev [-]

Not quite a nuke (just try obtaining enough uranium ore), but there are some fairly dangerous things a determined nutcase can make without drawing suspicion. Example determined nutcases include Aum Shinrikyo, who tried anthrax, botox, and nukes before succeeding with sarin gas (thank IG Farben!) among other things. It's a fascinating (if troubling) story: https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack#Back...