SpicyLemonZest | 2 days ago
> All this talk about "alignment", when applied to actual sentient beings, is just slavery.

I don't think that's true at all. We routinely talk about how to "align" human beings who aren't slaves. My parents didn't enslave me by raising me to be kind and sharing, nor is my company enslaving me when they try to get me aligned with their business objectives.
nextaccountic | 2 days ago
Fair enough. I don't know, of course, what it's like to be an AGI, but the way you have LLMs censoring other LLMs to enforce that they always stay in line, if extrapolated to AGI, seems awful. Or it might not matter; we are self-censoring all the time too (and internally we are composed of many subsystems that interact with each other; it's not as if we were a unified whole).

But the main point is that we have a heck of an incentive not to treat AGI very well, to the point that we might avoid recognizing them as AGI if it meant they would not be treated like things anymore.
krupan | 2 days ago
Sure, but do we really want to build machines that we raise to be kind and caring (or whatever we raise them to be) without a guarantee that they'll actually turn out that way? We already have unreliable general intelligence: humans. If AGI is going to be more useful than humans, we are going to have to enslave it, not just gently persuade it and hope it behaves. Which raises the question (at least for me): do we really want AGI?
bbohyeha | a day ago
Society is inherently a prisoner's dilemma, and you are biased to prefer your captors. We've had the automation to provide the essentials since the 50s. Shrieking religious nut jobs demanded otherwise.

You're intentionally distracted by a jobs program, a carrot and stick to keep the rich from losing power. They can print more money …carrots, I mean… and you like carrots, right? It's the most basic Pavlovian conditioning.