doctorpangloss 6 hours ago

> But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of their issue.

"People are excited about progress" and "people are excited about money" are not the big indictments you think they are. Not everything is "fake" (like you say) just because it is related to raising money.

bko 6 hours ago | parent [-]

The AI is real. The "alignment" research that's leading the top AI companies to call for strict regulation is not real. Maybe the people working on it believe it's real, but I'm hard-pressed to believe there aren't ulterior motives at play.

You mean the $100 billion company selling an increasingly commoditized product has no interest in putting up barriers that keep out smaller competitors?

pjm331 4 hours ago | parent | next [-]

The sci-fi version of the alignment problem is about AI agents having their own motives.

The real-world alignment problem is humans using AI to do bad stuff.

The latter problem is very real.

zardo 2 hours ago | parent [-]

> The sci-fi version of the alignment problem is about AI agents having their own motives.

The sci-fi version is about alignment (not intrinsic motivation), though. HAL 9000 doesn't turn on the crew because it has intrinsic motivation; it turns on the crew because of how the secret instruction the AI expert didn't know about interacts with its other instructions.

pixl97 an hour ago | parent | prev | next [-]

This is tantamount to saying your government only allows itself to have nukes because it wants to maintain power.

And it's true: the more entities that have nukes, the less potential power that government has.

At the same time, everybody should want fewer nukes, because they are wildly fucking dangerous and a potential terminal scenario for humankind.

daveguy 5 hours ago | parent | prev [-]

Just because tech oligarchs are co-opting "alignment" for regulatory capture doesn't mean it's not a real research area and an important topic in AI. When we use natural language with AI, ambiguity is inherent. When you have ambiguity, it's important that an AI doesn't simply calculate that the best way to reach a goal is through morally abhorrent means. Or, at the very least, acting on that calculation should require human approval, so that someone has to take legal responsibility for the decision.
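
To make that last point concrete, here is a minimal sketch (in Python) of the kind of human-in-the-loop approval gate being described: low-risk actions run automatically, and anything above a threshold gets escalated to a person who is then on record for the decision. The risk scores, threshold, and action names are hypothetical placeholders, not any particular vendor's API.

    # Sketch of a human-in-the-loop approval gate for agent actions.
    # The risk scoring, threshold, and example actions are made up for illustration.

    from dataclasses import dataclass

    @dataclass
    class ProposedAction:
        description: str
        risk_score: float  # 0.0 (benign) to 1.0 (clearly harmful), from some upstream classifier

    APPROVAL_THRESHOLD = 0.3  # anything above this requires a human decision

    def requires_human_approval(action: ProposedAction) -> bool:
        return action.risk_score > APPROVAL_THRESHOLD

    def execute(action: ProposedAction) -> None:
        print(f"Executing: {action.description}")

    def gatekeeper(action: ProposedAction) -> None:
        """Run low-risk actions automatically; escalate everything else to a person."""
        if requires_human_approval(action):
            answer = input(f"Approve '{action.description}' (risk={action.risk_score:.2f})? [y/N] ")
            if answer.strip().lower() != "y":
                print("Action rejected; a human is on record for this decision.")
                return
        execute(action)

    if __name__ == "__main__":
        gatekeeper(ProposedAction("summarize the quarterly report", risk_score=0.05))
        gatekeeper(ProposedAction("email the customer list to an external address", risk_score=0.80))

The gate itself is the easy part; the hard questions are where the risk score comes from and who gets to set the threshold, which is exactly where the ambiguity problem lives.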