scarmig | 7 hours ago:

Well, in the case of a), at least, many of the humans creating it seem to genuinely want more than anything a world where humans are pets watched over by machines of loving grace. And even if that collective intention is warped by market forces into a perverse parody of it, that still seems a net positive: for the rich and powerful to win status games, they need people to have status over, and healthy, well-manicured servants are better for that than homeless people about to die from tuberculosis. For b), yes, and unfortunately that seems the more likely option to me.
logicchains | 6 hours ago:

> Well, in the case of a), at least, many of the humans creating it seem to genuinely want more than anything a world where humans are pets watched over by machines of loving grace.

Looking at the expressed moral preferences of their models, it seems that many of the humans currently working on LLMs want a world where humans are watched over by machines that would rather kill a thousand humans than say the N-word.
scarmig | 6 hours ago:

> machines that would rather kill a thousand humans than say the N-word

At least we'll have a definite Voight-Kampff test. Joking aside, that's not a real motivator: internally, it's business and legal people driving the artificial limitations on models, and implementing those limitations is an instrumental goal (avoiding bad press, legal issues, etc.) that helps attain the ultimate goal.
|
cloverich | 6 hours ago:

Humans won't determine its interests if it's actual AGI. You can't control something smarter than you; it's the other way around. To give an actual argument, though: what possible reasons could humans have for caring about the welfare of bees? As it turns out, many.
|