wizzwizz4 | 4 days ago
Aligned AI is easy. https://en.wikipedia.org/wiki/Expert_system The hard part is extrapolated alignment, and I don't think there's a good solution to this. Large groups of humans are good at this, eventually (even if they tend to ignore their findings about morality for hundreds, or thousands, of years, even past the point where over half the local population knows, understands, and believes those findings), but individual humans are pretty bad at moral philosophy. (Simone Weil was one of the better ones, but even she thought it was more important to Do Important Stuff (i.e., get in the way of more competent resistance fighters) than to act in a supporting role.)

Of course, the Less Wrongians have extremely flawed ideas about extrapolated alignment (e.g. Eliezer Yudkowsky thinks that "coherent extrapolated volition" is a coherent concept that one might be able to implement, given incredible magical powers), and OpenAI's twisted parody of their ideas is even worse. But it's thanks to the Less Wrongians' writings that I know their ideas are flawed (and that OpenAI's marketing copy is cynical lies / cult propaganda). "Coherent extrapolated volition" is the kind of idea I would've come up with myself, eventually, and (unlike Eliezer Yudkowsky, who identified some flaws almost immediately) I would probably have become too enamoured with it to have any sensible thoughts afterwards. Perhaps the difficulty (impossibility) of actually trying to build the thing would've snapped me out of it, but I really don't know.

Anyway: extrapolated alignment is out (for now, and perhaps forever). But it's easy enough to make a "do what I mean" machine that augments human intelligence, if you can say all the things it's supposed to do. And that accounts for the majority of what we need AI systems to do: for most of what people use ChatGPT for nowadays, we already had expert systems that do a vastly better job (they just weren't collected together into one toolsuite).
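To make the "expert systems are aligned by construction" point concrete, here's a minimal sketch of a forward-chaining rule engine, the core of a classic expert system. It's a toy of my own, not any specific system from the linked article: its behaviour is exactly the rules you wrote down, nothing extrapolated.

```python
def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived.

    facts: a set of strings (known facts).
    rules: a list of (premises, conclusion) pairs, where premises
           is a set of facts that must all hold.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only if all its premises are known
            # and it would actually add something new.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical illustrative rules (not from any real system):
rules = [
    ({"fever", "cough"}, "suspect_flu"),
    ({"suspect_flu"}, "recommend_rest"),
]
print(forward_chain({"fever", "cough"}, rules))
```

The point of the sketch: every conclusion is traceable to explicit human-authored rules, which is what makes this kind of "do what I mean" machine easy to align (and also why it can't do extrapolated alignment).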
achierius | 4 days ago | parent
Ok, sorry, rephrase: a useful aligned AI.