AnthonyMouse 12 hours ago
> Are you only against AIs censoring information that's already publicly available, or are you against AIs censoring themselves when they know dangerous non-public information? Say the AI was the only thing to know the secret recipe for this WMD. Would this be like the scientist choosing not to tell everyone, or should the AI be designed to tell anyone who asks how to make a WMD?

This is kind of what I mean by ridiculous hypotheticals. So you have this un-counterable yet trivial-to-produce WMD -- something that has never existed in all of recorded history -- and an AI is the only thing that has the recipe. This is a movie plot.

Even then, are you sure the answer should be "never tell anyone"? This is a computer running code to process data. It has no means of knowing who you are or what your intentions are. You could be the scientist who needs the formula to devise an antidote because the thing has already been released.

"A computer can never be held accountable, therefore a computer must never make a management decision." It's not the machine's job to choose for you. It's frequently in error and it's not supposed to be in charge.

> This argument seems confused, because you're trying to assert that prohibiting censorship is okay because these dangerous scenarios will never happen, but also that censorship should still be prohibited if such a scenario did happen.

The problem comes from stipulating that something with a negligible probability has a high probability.

Suppose I say we should make mass transit free; no fares for anyone. You bring me the hypothetical that Hitler is on his way to acquire plutonium and he doesn't have bus fare, so the only thing preventing him from getting there is the bus driver turning him away for having nothing in his pockets. Then you ask if I still think we shouldn't charge fares to anyone. And the answer is still yes, because you have to make the decision ahead of time, when the plausibility of that scenario is still negligible.

It's theoretically possible that any given choice could result in Armageddon via the butterfly effect. If you stipulate that that's what happens, then obviously that's not what anybody wants, but it's also a thing that only happens in the implausible hypothetical. And if you're in a hypothetical, then you can also hypothesize your way out of it. What if it's a sting and the Allies are waiting for him at the plutonium factory, and he needs to get on the bus or you're depriving them of their only chance to kill Hitler? Unless you stipulate that the tragedy is unavoidable given the decision, which is just assuming the conclusion.
nearbuy 11 hours ago
> The problem comes from stipulating that something with a negligible probability has a high probability.

We are not doing so, and I don't know how I could have been clearer that we are not saying this hypothetical will happen. Would it help if the hypothetical were that the AI knows a magic spell that blows up the Earth?

It's a simple question: would you think AI censorship is acceptable if the information actually were dangerous? Don't tell me why the hypothetical is impossible, because that's entirely missing the point.

I don't know what your position is, and so I don't know what you're arguing for. I don't know if you consider freedom of information to be a terminal virtue, or if you think it's good only when the consequences are good. Telling me the hypothetical won't happen doesn't clarify anything; I already know that.

You can have the view that we only want freedom of information when it causes net good, and that it always causes net good. Or maybe you have the view that freedom of information is always virtuous and we shouldn't consider the consequences. Or maybe something else. Until you clarify your view, I don't know if or what we disagree about.