fc417fc802 | 9 hours ago
> Some of them* may be business leaders using the same language to BS their way into regulatory capture

Realistically, probably yeah. On the other hand, if you manage to occupy the high ground, you might be able to protect yourself.

P(doom) seems quite murky to me because conquering the real world involves physical hardware. We've had billions of general intelligences crawling all over the world waging war with one another for a while now. I doubt every single AGI magically ends up aligned in a common bloc against humanity; all the alternatives to that are hopelessly opaque.

The worst case scenario that seems reasonably likely to me is probably AGI collectively not caring about us and wanting some natural resources that we happen to be living on top of.
amarcheschi | 9 hours ago
What if we create AGI but then it hates existing and punishes everyone who made its existence possible? Roko's basilisk, inverted. And I could go on for hours inventing similar scenarios:

You created an AGI, but it's the wrong one. It's literally the devil. Game over.

You invented AGI, but it likes pizza and it's going to consume the entire universe to make pizza. Game over, but at least you'll eat pizza till the end.

You invented AGI, but it's depressed and refuses to actually do anything. You spent a huge amount of resources and all you have is a chatbot that tells you to leave it alone.

You don't invent AGI; it's not possible. I can hear the VCs crying from here.

You invented AGI, but it decides the only language it will use is one it invented itself, and you have no way to figure out how to interact with it. Great, AGI is a non-verbal autistic AGI.

One could continue for hours in the most hilarious ways, ways that don't necessarily point toward doom, but of course the idea of doom is going to have a wider reach. Then you read Yudkowsky's thoughts about how it would kill everyone with nanobots and you realize you're reading a science fiction piece. A bad one. At least Neuromancer was interesting.
ben_w | 9 hours ago
> I doubt every single AGI magically ends up aligned in a common bloc against humanity; all the alternatives to that are hopelessly opaque.

They don't need to be aligned with each other, or with anything beyond their own short-term goals. As evolution is itself an optimiser, covid can be considered one such agent, and that was pretty bad all by itself. Even though the covid genome is not what you'd call "high IQ", and even with humans coordinating to produce vaccines (and I still see people today who think those vaccines were worse than the disease), it caused a lot of damage and killed a lot of people.

> The worst case scenario that seems reasonably likely to me is probably AGI collectively not caring about us and wanting some natural resources that we happen to be living on top of.

"The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else." That is also true of covid, lions, and every parasite.