0xbadcafebee 13 hours ago

Ugh, doomers. They're all so stupid, but they play on people's fear of the unknown, and use it to sell books.

> How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all?

It would not want to, because it's not human. It's not subject to your desires, fears, emotions, and logical fallacies. It has no will to survive (much less compete), because it has no will at all; organisms only try to survive because they're genetically programmed to. It will not act of its own accord, because we wouldn't want it to; we want it to serve us, and that means responding to our prompts, not making up its own prompts.

We already know that LLMs and other 'intelligent' things are like animals, in that their intelligence is very different from human intelligence. They think differently, act differently, because they have a different fundamental nature.

And the most obviously ridiculous aspect is the idea that it wouldn't be controllable. We don't live in a science fiction world. We live in a world of bandwidth. There is a fixed capacity of compute, of network, of RAM. Hell, we can't even make enough RAM to power the god damn AI. If anything starts acting up, it won't take more than accidentally tripping and hitting the big red button in the datacenter to kill the super-AI.

If you made the machine want to kill all humans, sure, I can see it trying. But it simply won't work well enough or fast enough to be some kind of movie-like "tiny virus spreads into every device in the world in 1 second!" plot. It'll be drones controlled by the military, acting on a command sent by some doofus contractor who had too much access and not enough oversight, that strafes a school or something. And it'll be shut down, they'll do an audit, and add more humans in the loop. The same as with trains and everything else where we want safety.

happytoexplain 12 hours ago | parent [-]

Can we please just write our comments without "They're all so stupid"? It would be exactly the same comment, but better.

11 hours ago | parent | next [-]
[deleted]
piloto_ciego 8 hours ago | parent | prev [-]

Not OP, but the doomer mentality is pretty... well... dumb.

Humans are craftier than the doomers give them credit for. Doom works online, though, because the rewards you get online are mostly social.

There are 4 outcomes for a prediction:

Predict doom and Doom happens -> High reward because you look like a genius and everyone remembers because of negativity bias.

Predict doom and no doom happens -> No real penalty because everything is fine.

Predict no doom and no doom happens -> No real reward, because, hey, no doom, and even if you predict paradise and get paradise people will always dole out greater social rewards for predicting the bad scenario than the good scenario.

Finally, predict no doom and doom happens -> You look like an idiot (which is way worse than the null or minimal reward for predicting no doom and getting it right).
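The asymmetry in the four outcomes above can be sketched as a toy payoff matrix. The numbers here are invented purely for illustration; only their relative ordering reflects the argument:

```python
# Illustrative social payoffs for (prediction, outcome) pairs.
# The values are made up -- only their ordering matters: the
# asymmetry rewards predicting doom regardless of the odds.
PAYOFFS = {
    ("doom", "doom"): 10,       # look like a genius; negativity bias amplifies it
    ("doom", "no doom"): 0,     # no real penalty; everything is fine
    ("no doom", "no doom"): 1,  # no real reward; nobody remembers
    ("no doom", "doom"): -10,   # look like an idiot
}

def expected_payoff(prediction: str, p_doom: float) -> float:
    """Expected social reward of a prediction, given a probability of doom."""
    return (p_doom * PAYOFFS[(prediction, "doom")]
            + (1 - p_doom) * PAYOFFS[(prediction, "no doom")])

# Even when doom is unlikely (say 10%), predicting doom pays better:
print(expected_payoff("doom", 0.10))     # 1.0
print(expected_payoff("no doom", 0.10))  # -0.1
```

Under these (invented) payoffs, predicting doom dominates unless doom is nearly impossible, which is the bias the parent comment describes.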

The end result is a bias toward predicting that things will be utter crap, and people with crappy opinions get rewarded for their "hot takes" online. The old adage "pessimists get to be right, optimists get to be rich" is particularly apt here. Regardless, without significant and falsifiable evidence, predicting doom (or even predicting paradise) is something of a misstep. To be fair, given current trends I tend to expect things to get much, much better with time (though I could be wrong, and that'll be fine too).

Still, the internet rewards doom takes, which is how a guy like Yudkowsky, who is smart but not formally educated, builds a following. Not that formal education is really a prerequisite for making great changes to the world (I don't think Heaviside was formally educated, really), but in his case I think the lack of exposure to other ideas has led him down a path that just fundamentally misinterprets the risks, and given his online history, I think he falls victim to the game-theoretic trap above...

But you know, maybe I'll get fed feet first into the paperclip machine?

happytoexplain 8 hours ago | parent [-]

Sorry, I just stop reading comments as soon as they call categories of people "stupid" or "dumb". I'm not saying there aren't literally stupid people out there - but that's not the point. Charitably engaging with humans is one of the most critical challenges to civilization. You can strongly disagree, imply immorality, whatever - but the plain old "those people are stupid" line is a bright red flag correlating with bad-faith argument.

piloto_ciego 8 hours ago | parent [-]

Agree to disagree.

Not all the people I disagree with are stupid, but the people who constantly predict AI doom do not typically strike me as informed or knowledgeable. Yudkowsky included: I don't think he's stupid, really, but I do think the "doomer" take is an unintelligent/uninformed take, at least when it comes to AI.

I mean, if we saw an asteroid coming, or something to indicate that the clathrate gun hypothesis was something we should expect and there was scientific consensus on it, then obviously, strategically panic. But that's not really analogous to what's happening in AI, and... Yudkowsky is just some internet rando who's built up this kooky idea that we're doomed if we build a better calculator, and built a following around it. I mean, I'm being facetious, but you get the idea. I don't take him very seriously.