bko · 5 hours ago
The AI is real. The "alignment" research that's leading the top AI companies to call for strict regulation is not. Maybe the people working on it believe it is real, but I'm hard-pressed to think there aren't ulterior motives at play. Are we supposed to believe that a hundred-billion-dollar company selling an increasingly commoditized product has no interest in putting up barriers that keep out smaller competitors?
pjm331 · 4 hours ago
The sci-fi version of the alignment problem is about AI agents having their own motives. The real-world alignment problem is humans using AI to do bad stuff. The latter problem is very real.
pixl97 · an hour ago
This is tantamount to saying your government only allows itself to have nukes because it wants to maintain power. And it's true: the more entities that have nukes, the less potential power that government has. At the same time, everybody should want fewer nukes, because they are wildly fucking dangerous and a potential terminal scenario for humankind.
daveguy · 5 hours ago
Just because tech oligarchs are co-opting "alignment" for regulatory capture doesn't mean it's not a real research area and an important topic in AI. When we use natural language with AI, ambiguity is unavoidable, and where there's ambiguity, it's important that an AI doesn't simply calculate that the best way to reach a goal is through morally abhorrent means. Or, at the very least, acting on that calculation should require human approval, so that someone has to take legal responsibility for the decision.