bondarchuk 6 days ago
I think the big difference between our views is that you are taking the rationalist argument to be "from intelligence follows malice, therefore it will want to kill us all," whereas I take it to be "from intelligence follows great capability and no morality, therefore it may or may not kill us uncaringly in pursuit of other goals."
godelski 6 days ago | parent
I think they put P(doom) at a high number[0]. In other words, AGI is likely to kill us. I interpret this as "if we build a really intelligent machine, it is very likely to kill us all." My interpretation is mainly based on them saying exactly that. Yud literally wrote a book titled "If Anyone Builds It, Everyone Dies."[1] There's not much room for ambiguity here...

[0] Yud is on the record saying at least 95%: https://pauseai.info/pdoom He also said anyone with a higher P(doom) than him is crazy, so I think that says a lot...