If Anyone Builds It, Everyone Dies (ifanyonebuildsit.com)
13 points by lisper 13 hours ago | 16 comments
gmuslera 12 hours ago
The elephant in the room is the man in the room. AIs are still tools controlled by people, especially people in power. Even with their own agency, they have their base prompts and biased information feeds controlled by people in power. AIs are dangerous because dangerous people now have a bigger hammer to hit us with.
0xbadcafebee 12 hours ago
Ugh, doomers. They're all so stupid, but they play on people's fear of the unknown and use it to sell books.

> How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all?

It would not want to, because it's not human. It's not subject to your desires, fears, emotions, and logical fallacies. It has no will to survive (much less compete), because it would have no will at all; organisms only survive because they're genetically programmed to. It will not act of its own accord, because we wouldn't want it to; we want it to serve us, and that means responding to our prompts, not making up its own prompts. We already know that LLMs and other 'intelligent' things are like animals, in that their intelligence is very different from human intelligence. They think differently and act differently, because they have a different fundamental nature.

And the most obviously ridiculous aspect is the idea that it wouldn't be controllable. We don't live in a science-fiction world. We live in a world of bandwidth. There is a fixed capacity of compute, of network, of RAM. Hell, we can't even make enough RAM to power the god damn AI. If anything starts acting up, it won't take more than accidentally tripping and hitting the big red button in the datacenter to kill the super-AI.

If you made the machine want to kill all humans, sure, I can see it trying. But it simply won't work well enough or fast enough to be some kind of movie-like "tiny virus spreads into every device in the world in 1 second!" plot. It'll be drones controlled by the military, acting on a command sent by some doofus contractor who had too much access and not enough oversight, that strafe a school or something. And it'll be shut down, they'll do an audit, and they'll add more humans to the loop. The same as with trains and everything else where we want safety.
Mobius01 10 hours ago
I had some credits sitting on Audible doing nothing, so I picked this up out of curiosity about Mr. Yudkowsky's reputation as an irredeemable AI pessimist. Hopefully this is better than the AI doc film, which was borderline insulting.
PorterBHall 9 hours ago
I’m in the middle of this right now. They detail a scenario that starts off pretty convincing but takes a turn into a sci-fi feel when this fictional model starts strategizing about how to escape its containment. The core argument is that these models aren’t crafted so much as they’re grown. They show examples where models display not desires but preferences (e.g. lying and cheating to testers), and the AI companies aren’t able to control or even interpret those preferences. If LLMs get to a superintelligence phase (big if there), the gap between their capabilities and our understanding of them grows even larger.
stanski 10 hours ago
Sounds like an Ayreon song.
cyanydeez 12 hours ago
I think it's more of a trolley problem: if you don't fight everyone for the switch, someone will pull the switch and you'll somehow end up tied to the tracks. The framing of this stuff is pretty interesting.
zingababba 12 hours ago
It wouldn't kill me, I always say pls and thx.
drivebyhooting 12 hours ago
What about: if any sociopathic super genius is ever born and raised to his full ability, everyone dies? It’s not like AI is the only serious threat humanity has faced.