metalcrow 3 hours ago
I'm kinda confused as to _what_, exactly, this post is saying. Is it saying that alignment needs to be better? That seems strictly pro-safetyism. But he talks about Eliezer's ethics negatively, so does he not believe that AI is a world-ending risk? If he just believes that AI is not that dangerous and only needs some minor, "correctly done" alignment, I don't think his stance is meaningful as an anti-both-sides perspective, because that's basically equivalent to the status quo.