justonepost2 3 hours ago
If you successfully build a highly capable "aligned" model (according to some class of definitions Anthropic would use for the words "capable" and "aligned"), and it brings about a global dark age of poverty and inequality by completely eliminating the value of labor versus capital, can you still call it aligned? If the answer is "yes", our definition of alignment kind of sucks.
chriskanan an hour ago
Jobs are an invention of humanity. About 50% of people dislike their job, and people spend much of their lives working. Poverty and inequality are a choice: they happen only if society chooses poorly.
| ||||||||||||||||||||||||||||||||||||||
taneq 19 minutes ago
Maybe a sufficiently aligned AI would necessarily decide that the zeroth law was necessary, and abscond. (I'm reading Look to Windward by Iain M. Banks at the moment, and I just got to the aside where he explains that any truly unbiased 'perfect' AI immediately ascends and vanishes.)