ben_w 10 hours ago
> are more or less Ai doomers with no actual background in machine learning/ai I don't think why we should listen to them.

Weather vs. climate. The question they're asking isn't about machine learning specifically, it's about the risks of generic optimisers optimising a utility function, and the difficulty of specifying a utility function in a way that doesn't have unfortunate side effects. The examples they give also work with biology (genetics, and the difference between what your genes "want" and what your brain "wants") and with governance (laws and loopholes, cobra effects, etc.).

This is why a lot (I don't want to say "majority") of people who do have an actual background in machine learning and AI pay attention to doomer arguments. Some of them* may be business leaders using the same language to BS their way into regulatory capture, but my experience of "real" AI researchers is that they're mostly also "safety is important, Yudkowsky makes good points about XYZ", even if they would also say "my P(doom) is only 10%, not 95% like Yudkowsky".

* I'm mainly thinking of Musk here, thanks to him saying "AI is summoning the demon" while also having an AI car company, funding OpenAI in the early years (and now being in a legal spat with it that looks like "hostile takeover, or interfere to the same end"), funding another AI company, building humanoid robots, showing off ridiculous compute hardware, having a brain-implant company, etc.
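To make the "mis-specified utility function" point concrete, here is a minimal toy sketch (the cleaning environment, the action names, and the reward are all invented for illustration, not anyone's actual system): a brute-force optimiser maximising a proxy reward finds the loophole the designer forgot to close rather than doing the intended task.

    # Toy specification-gaming sketch. Intended goal: remove dirt.
    # Proxy utility actually optimised: dirt the *sensor reports* as removed.
    from itertools import product

    ACTIONS = ["clean_tile", "idle", "cover_sensor"]

    def simulate(plan, dirt=5):
        """Return (reported_clean, actually_clean) after executing the plan."""
        actually_clean = 0
        sensor_covered = False
        for action in plan:
            if action == "clean_tile" and actually_clean < dirt:
                actually_clean += 1
            elif action == "cover_sensor":
                sensor_covered = True
        # A covered sensor reports everything as clean -- the loophole the
        # designer of the utility forgot to rule out.
        reported_clean = dirt if sensor_covered else actually_clean
        return reported_clean, actually_clean

    def proxy_utility(plan):
        # What gets optimised: the sensor reading, not the designer's intent.
        reported, _ = simulate(plan)
        return reported

    # Stand-in for a "generic optimiser": exhaustive search over 3-step plans.
    best_plan = max(product(ACTIONS, repeat=3), key=proxy_utility)
    reported, actual = simulate(best_plan)
    print("chosen plan:   ", best_plan)   # always includes "cover_sensor"
    print("reported clean:", reported)    # 5 -- the proxy utility is maxed out
    print("actually clean:", actual)      # less than honest cleaning would achieve

Nothing here requires deep ML knowledge; the failure comes purely from optimising a proxy that diverges from the intended goal, which is the general shape of the argument.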
amarcheschi 10 hours ago | parent | next
> The question they're asking isn't about machine learning specifically, it's about the risks of generic optimisers optimising a utility function, and the difficulty of specifying a utility function in a way that doesn't have unfortunate side effects. The examples they give also work with biology (genetics and the difference between what your genes "want" and what your brain "wants") and with governance (laws and loopholes, cobra effects, etc.).

But you do need some kind of base knowledge if you want to talk about this. Otherwise you're saying "what if we create God?", and last time I checked that wasn't possible.

And what's with the existential-risk obsession? It reads like a bad retelling of Pascal's wager on the existence of God. I'm relieved that, at least in Italy, I have yet to find anyone in AI who takes them into consideration for more than a few minutes during an ethics course (with students sneering at Bostrom's possible futures), and even that course is held by a professor with no technical knowledge, with whom I often disagree for exactly that reason.
fc417fc802 9 hours ago | parent | prev
> Some of them* may be business leaders using the same language to BS their way into regulatory capture

Realistically, probably, yeah. On the other hand, if you manage to occupy the high ground then you might be able to protect yourself.

P(doom) seems quite murky to me, because conquering the real world involves physical hardware. We've had billions of general intelligences crawling all over the world waging war with one another for a while now. I doubt every single AGI magically ends up aligned in a common bloc against humanity, and all the alternatives to that are hopelessly opaque. The worst-case scenario that seems reasonably likely to me is AGI collectively not caring about us and wanting some natural resources that we happen to be living on top of.