blamestross a day ago
I'm kind of upset to see "alignment" and "AI safety" systematically co-opted to mean "undesirable business outcomes". These are existential problems, not mild profit blockers. It's almost as if the goals of humanity and these companies are misaligned.
godelski a day ago
Certainly. I'd say that we've created a lot of Lemon Markets, if not an entire Lemon Economy [0]. The Lemon Market is literally an alignment problem resulting from asymmetric information. Clearly the intent of the economy (via our social contract) is that we allocate money towards things that provide "value", where I think we generally interpret that word to mean bettering people's lives in some form or another. But it is also clear that the term takes on other definitions and isn't perfectly aligned with making us better. Certainly our metrics can be hacked, as in the case of Lemon Markets.

A well-functioning market has competition that not only drives down prices but also increases the quality of products. Customers obviously want to maximize quality and minimize price at the same time. But when customers cannot differentiate quality, they can only minimize price. This creates a feedback loop where producers race to the bottom, sacrificing quality in favor of lower prices (and thus higher profits). Not because this is what customers actually want, but because the market is inefficient.

I think what's critical about these alignment issues is that they're not primarily driven by people trying to be malicious or deceptive. More often they're driven by short-sightedness and overlooked subtle nuances. They don't happen all at once; they creep in slowly, which makes them harder to detect. It's like good horror: you might know something is wrong, but by the time you put it all together you're dead. It isn't because anyone is dumb or doing anything evil, but because maintaining alignment is difficult and mistakes are easy.
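The race-to-the-bottom loop described above can be sketched as a toy Akerlof-style simulation. Everything here is an illustrative assumption, not from the comment: seller quality is uniform on [0,1] and doubles as the seller's reservation price, and buyers, unable to observe quality, offer a multiple of the average quality still on the market. High-quality sellers exit whenever the blind offer falls below their reservation price, dragging the average down each round:

```python
import random

def surviving_share(buyer_value, n_sellers=10000, rounds=30, seed=0):
    """Fraction of sellers left after repeated adverse selection.

    Each seller's quality q ~ U(0,1) is also their reservation price.
    Buyers can't see q, so they offer buyer_value * (mean quality of
    sellers still present). Sellers with q above the offer withdraw.
    """
    rng = random.Random(seed)
    qualities = [rng.random() for _ in range(n_sellers)]
    for _ in range(rounds):
        if not qualities:
            break
        offer = buyer_value * sum(qualities) / len(qualities)
        # High-quality sellers can't profitably sell at the blind
        # offer, so they leave; average quality falls next round.
        qualities = [q for q in qualities if q <= offer]
    return len(qualities) / n_sellers

# When buyers value quality at only 1.5x a seller's reservation
# price, the market unravels; at 2.5x, everyone keeps trading.
print(surviving_share(1.5))  # near 0: the market collapses
print(surviving_share(2.5))  # 1.0: all sellers remain
```

The point of the toy model is that nobody in it is malicious: buyers bid rationally on the information they have, sellers exit rationally, and the market still destroys quality.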
tbrownaw 21 hours ago
No, we are not on track to create a literal god in the machine. Skynet isn't actually real. LLM systems do not have intent in the way these worries presuppose. This is all much, much less of an existential threat than, say, nuclear-armed countries getting into military conflicts, or overworked grad students having lab accidents with pathogen research. Maybe it's as dangerous as the printing press and the wars that it caused?
hamburga a day ago
Agreed. I see this more and more as AI safety discourse spills into the general lexicon and into PR efforts. For example, the "sycophantic" GPT-4o was also described as "misaligned" as code for "unlikable". In the meme, I filed this under "personality programming". Very different from the kinds of problems the original AI alignment writers were focused on.