| ▲ | amarcheschi 8 months ago |
> The question they're asking isn't about machine learning specifically, it's about the risks of generic optimisers optimising a utility function, and the difficulty of specifying a utility function in a way that doesn't have unfortunate side effects. The examples they give also work with biology (genetics and the difference between what your genes "want" and what your brain "wants") and with governance (laws and loopholes, cobra effects, etc.).

But you do need some kind of base knowledge, if you want to talk about this. Otherwise you're saying "what if we create God". And last time I checked it wasn't possible.

And what's with the existential risk obsession? That's like a bad retelling of Pascal's wager on the existence of God. I'm relieved that, at least in Italy, I have yet to find anyone in AI taking these ideas into consideration for more than a few minutes during an ethics course (with students sneering at Bostrom's possible futures), and even then the course is held by a professor with no technical knowledge, with whom I often disagree for exactly that reason.
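For concreteness, the kind of failure the quoted paragraph worries about can be sketched in a few lines. This is a toy example: the "cleaning robot" setup and every number in it are invented for illustration, not anyone's real system.

    # Toy sketch of a mis-specified objective: we want a tidy room,
    # but the reward we actually wrote down is "pieces of mess removed".
    from itertools import product

    ACTIONS = ("clean", "dump", "wait")

    def proxy_reward(plan, mess=2):
        """The objective we specified: +1 per piece of mess removed."""
        reward = 0
        for a in plan:
            if a == "clean" and mess > 0:
                mess -= 1
                reward += 1
            elif a == "dump":
                mess += 1          # creating mess costs nothing under this proxy
        return reward

    def true_utility(plan, mess=2):
        """What we actually wanted: little mess left, little effort spent."""
        for a in plan:
            if a == "clean" and mess > 0:
                mess -= 1
            elif a == "dump":
                mess += 1
        return -mess - 0.1 * sum(a != "wait" for a in plan)

    best = max(product(ACTIONS, repeat=6), key=proxy_reward)
    print(best)                # the winning plan dumps mess just to re-clean it
    print(proxy_reward(best))  # 4: higher than any plan that never makes a mess
    print(true_utility(best))  # worse than simply cleaning up and then stopping

The optimiser isn't malicious; it maximises exactly what was written down, and the gap between that and what was meant is the whole difficulty.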
| ▲ | ben_w 8 months ago | parent |
> But you do need some kind of base knowledge, if you want to talk about this. Otherwise you're saying "what if we create God". And last time I checked it wasn't possible.

The base knowledge is game theory, not quite the same focus as the maths used to build an AI. And the problem isn't limited to "build god"; hence my example of the cobra effect, in which humans bred snakes because they were following the natural incentives of laws made by other humans who didn't see what would happen until it was so late that even cancelling the laws resulted in more snakes than they started with.

> And what's with the existential risk obsession? That's like a bad retelling of Pascal's wager on the existence of God.

And every "be careful what you wish for" story. Is climate change a potentially existential threat? Is global thermonuclear war a potentially existential threat? Are pandemics, both those from lab leaks and those evolving naturally in wet markets, potentially existential threats? The answer to all is "yes", even though these are systems with humans in the loop. (Even wet markets: people have been calling for better controls of them since well before Covid.)

AI is automation. Automation has bugs. If the automation has a lot of bugs, you've got humans constantly checking things, despite which errors still get past QA from time to time. If it were perfect automation, you wouldn't have to check it… but nobody knows how to do perfect automation.

"Perfect" automation would be god-like, but just as humans keep mistaking natural phenomena for deities, an AI doesn't have to actually be perfect for humans to set it running without checking the output and then be surprised when it all goes wrong. A decade ago the mistakes were companies doing blind dictionary merges on "Keep Calm and …" T-shirts; today it's LLMs giving legal advice (and perhaps writing US trade plans). They (the humans) shouldn't be doing those things, but they do them anyway, because humans are like that.
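The cobra-effect incentive can be spelled out with numbers (all invented for illustration; the historical anecdote wasn't this tidy):

    # A bounty meant to reduce cobras, seen from a bounty hunter's side.
    BOUNTY = 5          # payout per dead cobra
    HUNT_COST = 4       # effort to catch a wild one
    BREED_COST = 1      # effort to raise a captive one

    profit = {"hunt wild": BOUNTY - HUNT_COST, "breed": BOUNTY - BREED_COST}
    print(profit)       # {'hunt wild': 1, 'breed': 4} -> breeding dominates, farms appear

    # The bounty gets cancelled; farmed cobras are now worthless and get released.
    wild_before, farmed = 1000, 3000
    wild_after = wild_before + farmed
    print(wild_after > wild_before)   # True: more snakes than they started with

Nobody in that story is optimising anything exotic; ordinary people following a badly specified incentive is enough.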