ben_w 13 hours ago
> The danger is an AI that decides to re-perpetrate the class division that our existing system does. Or the people in charge use it for that.

Given human political cycles, every generation or so there's some attempt to demonise a minority or three, and every so often it goes from "demonise" to "genocide". In principle, AIs have plenty of other ways to go wrong besides the human part. No idea how long it would take for them to be competent enough for traditional "doom" scenarios, but the practical reality we can already witness is chronic human laziness: just as "vibe coding" was coined to mean "don't even bother looking at what the AI does, just accept it", there are going to be similar trends in every other domain. What this means for personalised recommendations, I don't know for sure, but I suspect it'll look halfway between a cult and taking horoscopes and fashion guides too seriously.
gnarlouse 12 hours ago
Fully agree with you, and it was sort of a miscommunication on my part to say "AI that decides" when I really meant "an AI model baked with malice/negligence by malicious/negligent creators."