jibal 5 hours ago

The usual bunch of logical fallacies and unexamined assumptions from Bostrom (e.g., https://www.reddit.com/r/philosophy/comments/j4xo8e/the_univ...).

Good philosophers focus on asking piercing questions, not on proposing policy.

> Would it not be wildly irresponsible, [Yudkowsky and Soares] ask, to expose our entire species to even a 1-in-10 chance of annihilation?

Yes, if that number is anywhere near reality, about which there is considerable doubt.

> However, sound policy analysis must weigh potential benefits alongside the risks of any emerging technology.

Must it? Or is this a deflection from concern about immense risk?

> One could equally maintain that if nobody builds it, everyone dies.

Everyone is going to die in any case, so this is a red herring that misframes the issues.

> The rest of us are on course to follow within a few short decades. For many individuals—such as the elderly and the gravely ill—the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.

"might", if one accepts numerous dubious and poorly reasoned arguments. I don't.

> In particular, sufficiently advanced AI could remove or reduce many other risks to our survival, both as individuals and as a civilization.

"could" ... but it won't; certainly not for me as an individual of advanced age, and almost certainly not for "civilization", whatever that means.

> Superintelligence would be able to enormously accelerate advances in biology and medicine—devising cures for all diseases

There are numerous unstated assumptions here, notably the assumption that all diseases are "curable", whatever exactly that means; the "cure" might require a brain transplant, for instance.

> and developing powerful anti-aging and rejuvenation therapies to restore the weak and sick to full youthful vigor.

Again, this just assumes that such things are feasible, as if an ASI were a genie or a magic wand. Not everything that can be conceived of is technologically possible. It's like saying that with an ASI we could find the largest prime (there is none, per Euclid) or solve the halting problem (provably undecidable).

> These scenarios become realistic and imminent with superintelligence guiding our science.

So he baselessly claims.

Sorry, but this is all apologetics, not an intellectually honest search for truth.

logicchains 3 hours ago

The author fundamentally doesn't understand complexity theory. Many processes in our universe are chaotic in the formal sense, requiring compute that grows exponentially with how far into the future you simulate. No amount of poorly defined "intelligence" can get around the fact that simulating such processes even a few seconds ahead would take more compute than is available in the entire universe. An AI would hence need to run scientific experiments to obtain information, just as humans do, and many experiments have an unavoidable time component that cannot be sped up, so there's no way an AI could suddenly cure all diseases no matter how "intelligent" it was. These singularity types are basically medieval woo merchants trying to convince you that it's possible to magically sort an arbitrary array in O(1) time.
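
To make that concrete, here's a minimal sketch (my illustration, not the commenter's) using the logistic map at r = 4, a textbook formally chaotic system: an initial-condition error roughly doubles each step, so predicting one more step at fixed accuracy costs about one more bit of initial precision, and the cost of accurate simulation grows exponentially with the horizon no matter how clever the simulator is.

    # Two trajectories of the chaotic logistic map, started 1e-12 apart.
    # The gap roughly doubles each step (Lyapunov exponent ln 2), so a
    # 1e-12 uncertainty becomes macroscopic after only ~37 iterations.
    def logistic(x):
        return 4.0 * x * (1.0 - x)

    x, y = 0.3, 0.3 + 1e-12  # nearly identical initial conditions
    for step in range(1, 101):
        x, y = logistic(x), logistic(y)
        if abs(x - y) > 0.1:  # the two forecasts now disagree badly
            print(f"diverged after {step} steps from a 1e-12 perturbation")
            break

Pushing the horizon out by k more steps multiplies the required initial precision by about 2^k, which is the exponential blow-up in miniature.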

mrob 2 hours ago

Consider weather prediction. Fluid dynamics are chaotic, so that's a good example of something where no amount of compute is sufficient in the general case. An ASI, not being dumb, will of course immediately recognize this and realize it has to solve for the degenerate case. It therefore implements the much easier sub-goal of removing the atmosphere. Humans will naturally object to this if they find out, so it logically proceeds with the sub-sub-goal of killing all humans. What's the weather next month? Just a moment, releasing autonomous murder drone swarm...

slackr 4 hours ago

Spot on.

Frankly, I’m unsure if it’s meant to be satire.