pingou 8 days ago
I don't have a horse in this race, but could you give a few examples of where he was wrong?
bglazer 8 days ago
Here's one: Yudkowsky has been confidently asserting for years that AI will drive humanity extinct because it will learn to build nanomachines held together by "strong" covalent bonds rather than the "weak" van der Waals forces that biological systems like proteins rely on. I'm certain that knowledgeable biologists and physicists have tried to explain to him why this belief is basically nonsense, but he just keeps repeating it. Heck, there's even a LessWrong post that lays it out quite well [1]. This points to a general disregard for detailed knowledge of existing systems and a preference for "first principles" beliefs, no matter how wrong they are.

[1] https://www.lesswrong.com/posts/8viKzSrYhb6EFk6wg/why-yudkow...