ben_w | 9 hours ago:
> But you do need some kind of base knowledge, if you want to talk about this. Otherwise you're saying "what if we create God". And last time I checked it wasn't possible.

The base knowledge is game theory, not quite the same focus as the maths used to build an AI. And the problem isn't limited to "build god" — hence my examples, like the cobra effect, in which humans bred snakes because they were following the natural incentives of laws made by other humans who didn't see what would happen until it was so late that even cancelling the laws left them with more snakes than they started with.

> And what's with the existential risk obsession? That's like a bad retelling of the Pascal bet on the existence of God.

And every "be careful what you wish for" story. Is climate change a potentially existential threat? Is global thermonuclear war a potentially existential threat? Are pandemics, both those from lab leaks and those evolving naturally in wet markets, potentially existential threats? The answer to all is "yes", even though these are systems with humans in the loop. (Even wet markets: people have been calling for better controls of them since well before Covid.)

AI is automation. Automation has bugs. If the automation has a lot of bugs, you've got humans constantly checking things, despite which errors still get past QA from time to time. If it were perfect automation, you wouldn't have to check it… but nobody knows how to do perfect automation.

"Perfect" automation would be god-like, but just as humans keep mistaking natural phenomena for deities, an AI doesn't have to actually be perfect for humans to set it running without checking the output and then be surprised when it all goes wrong. A decade ago the mistakes were companies doing blind dictionary merges on "Keep Calm and …" T-shirts; today it's LLMs giving legal advice (and perhaps writing US trade plans). They (the humans) shouldn't be doing those things, but they do them anyway, because humans are like that.
amarcheschi | 9 hours ago | parent:
My issue is not with studying AI risk; my issue is with empowering people who don't have formal education in anything related to AI. And yes, you need some maths background, otherwise you end up like Yudkowsky saying, three years ago, that we all might be dead by now or next year. Or using Bayesian probability in a way that makes you think they would have spent their time better taking a statistics course.

There are serious AI researchers studying AI risk, and I don't see anything wrong with that. But of course, their claims and papers are far less alarmist than the AI doomerism present in those circles. And one thing they do sound the alarm on is that very doomerism and the TESCREAL movement and ideals promoted by the aforementioned Alexander, Yudkowsky, Bostrom, etc.