schoen 15 hours ago

The concern I hear the most (which I don't think is common among the general public) is the existential risk one (that an AI may be created that drastically exceeds human intelligence, and that it may accidentally be incentivized to take actions that destroy most or all of human civilization).

JumpCrisscross 15 hours ago | parent | next [-]

> concern I hear the most (which I don't think is common among the general public) is the existential risk one

Altman and friends' "stop us before we shoot grandma" PR tour in 2023 and '24 is largely the cause of this AI backlash. If you tell everyone you're building something that will kill us all, you will scare up investors. But you'll also turn the public against you. In truth, we have zero evidence of the alignment problem to date in the existential form. Instead, it's the usual technology enabling bad actors stuff.

salawat 8 hours ago | parent | next [-]

The "Alignment Problem" is already here. We just call it Corporate Governance. We happen to be failing at it massively right now.

SpicyLemonZest 14 hours ago | parent | prev [-]

The "alignment problem" as traditionally understood assumed a different path to AI development, where the best AIs wouldn't primarily operate on a substrate of human language. If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem, and it seems no less likely today than it did in 2023.

JumpCrisscross 14 hours ago | parent [-]

> If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem

That's massively moving the goalposts on what counts as "an existential problem." The original framing was not economic dislocation but actual existence, i.e. existential. This new framing is a retreat to a way-of-life argument.

And I'm still calling baloney! The "AI will kill us all" argument backfired on Altman et al, so now we have an "it'll take over all the jobs" pitch. But it's all smoke and mirrors for investors. We have no good reason to expect current AI methods will lead to an AGI that can not only do most human labour, but do so economically competitively.

SpicyLemonZest 13 hours ago | parent [-]

I don't understand how you can consider the AI industry to be in any sense retreating from prior claims. The existential problem remains an active near-future risk; you're hearing a lot about the jobs problem because it's already here, now, today. Do you not remember how much less capable AI systems were in 2023, and how implausible it seemed that they could become as good as they are now without new theoretical breakthroughs?

keybored 5 hours ago | parent | prev [-]

In that sense the general public is less superstitious than many technologists. Some of the general public might anthropomorphize too hard, but that's pretty tame compared to the belief that an alien AI intelligence will sprout and kill us all, accidentally or intentionally.

As far as the paperclip problem is concerned, we’ve already had that problem for a long time now in the form of good old fashioned human institutions.