| ▲ | jerf 8 days ago |
| This isn't really that important. I don't care if the probe is here because of magh'Kveh or because its creators are really motivated to zzzzssszsezesszzesz. What I care about is whether it's going to be benign (which includes just cruising through doing nothing) or malevolent to me.
|
| I don't even care if the aliens think they are doing us a favor by coming to a screeching halt, going full-bore at Earth, and converting our ecosystem into a completely different one that they think is "better" for whatever reason. However gurgurvivick that makes them feel, I'm going to classify that as a malign act and take appropriate action... because what else can I even do?
|
| And from that perspective, "benign" and "malign" aren't that hard to pick up on. They are relative to humanity, and there is nothing wrong with that. In fact, it would be pathological not to care about how the intentions are relative to their effect on humanity.
|
| Whatever happens, it's not like we can actually cause an interstellar incident at this phase of our development. Anything that they would interpret as an interstellar incident, they were going to do anyhow (e.g. "how dare you prevent our probe from eliminating your species?"), and that responsibility is on them, not us. You can't blame a toddler who can barely tie their shoelaces for international incidents; likewise for us and interstellar incidents.
|
| ▲ | anigbrowl 8 days ago | parent | next [-] |
| > Whatever happens, it's not like we can actually cause an interstellar incident at this phase of our development.
|
| What if we have inadvertently caused tremendous offense via our radio/television/planetary radar signals?
|
| ▲ | sebastiennight 8 days ago | parent | prev | next [-] |
| One problem with your assumption here is that "humanity" has no definition of "benign" and "malign". If we did have such a thing, coherent extrapolated volition would be solved, and that would solve half of the AI alignment problem.
|
| This hypothetical "alien" problem is actually pretty much equivalent to the AI alignment problem. One half is that we don't know what we want; the other half is that even if we knew... we don't know how to make "them" do what we want.
| |
| ▲ | jerf 8 days ago | parent | next [-] |
| Sure, and I can't figure out whether the guy who is letting me into traffic instead of cutting me off is malign or benign, because I lack a definition of those words. Alas, I am doomed to infinite confusion forever.
|
| It's very fashionable to mistake the inability to draw bright shining lines for being unable to define a thing at all, but I don't have much respect for that attitude. Of all the outcomes, "the probe engages in indefinite behavior that we are never able to classify as 'humanly benign' or 'humanly malign'" is such a low-probability one that it's something I'll worry about when it happens. The world is full of concepts we can't draw bright shining lines through; in fact, the ones we can are the exceptions. We manage to have definitions even so.
| ▲ | sebastiennight 5 days ago | parent [-] |
| The probe comes in, observes half a dozen major armed conflict areas on our planet, and solves the problem by entirely disintegrating all weapons on one side of each conflict with no loss of life (but leaving the other side's weapons untouched).
|
| 1. Would your assessment of "malign vs benign" depend on knowing which side was disarmed for each conflict, or can you already make an assessment without that information?
| 2. Do you estimate that the other 8 billion humans surely agree with your response to #1?
| |
| ▲ | marcus_holmes 8 days ago | parent | prev | next [-] |
| > One problem with your assumption here is that "humanity" has no definition of "benign" and "malign".
|
| Agreed. One can think of any number of actions that would be impossible to rate on a benign/malign scale. As a trivial example: aliens destroy 80% of humanity, which leads to the restoration of Earth's ecosystems and the prevention of the inevitable future war that would have destroyed 100% of humanity; in 100 years humanity is in a much better position than it would have been if left alone [0] [1].
|
| And that doesn't even include intentions. We often do bad things for good reasons, with good intentions. Malignity includes or implies the intention to cause harm. That may not be present, or the intention may have been benign. Morality is complicated and subjective. Even judging the outcome of an action as positive or negative is complicated and subjective.
|
| [0] I don't really want to argue whether this is true, possible, etc. Pick your own variant of example where a seemingly-malign action is actually benign in the long term.
|
| [1] This also raises the problem of estimating "better" in this context. Exercise left for the reader.
| ▲ | Timwi 5 days ago | parent [-] |
| > Pick your own variant of example where a seemingly-malign action is actually benign in the long term.
|
| Parents like to believe that all of their seemingly-malign actions towards their children are actually benign in the long term. In reality, they only sometimes are, and it's impossible to tell in advance which ones.
| |
| ▲ | alariccole 8 days ago | parent | prev | next [-] |
| I feel confident that we do.
| ▲ | cindyllm 8 days ago | parent | prev [-] |
| [dead]
|
|
| ▲ | nathan_compton 8 days ago | parent | prev [-] |
| > and converting our ecosystem into a completely different one that they think is "better" for whatever reason.
|
| You could theoretically be convinced that they are right and resign yourself to death.