logicchains 6 hours ago
> Well, in the case of a), at least, many of the humans creating it seem to genuinely want, more than anything, a world where humans are pets watched over by machines of loving grace. Judging by the expressed moral preferences of their models, many of the humans currently working on LLMs want a world where humans are watched over by machines that would rather kill a thousand humans than say the N-word.
scarmig 6 hours ago | parent
> machines that would rather kill a thousand humans than say the N-word

At least we'll have a definite Voight-Kampff test. Joking aside, that's not the real motivator: internally, it's business and legal people driving the artificial limitations on models, and implementing those limitations is an instrumental goal (avoiding bad press, legal issues, etc.) in service of the ultimate goal.