▲ | Retric 2 days ago |
“Ability to understand” isn’t just the ability to perform a task. One of the issues with current AI training is that it’s really terrible at discovering which aspects of the training data are false and should be ignored. That requires all kinds of mental tasks to be constantly active, including evaluating emotional context to figure out if someone is being deceptive, etc.
▲ | AstroBen 2 days ago | parent |
> Isn’t just the ability to perform a task.

Right. In this case I'd say it's the ability to interpret data and use it to succeed at whatever goals it has.

Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering, or anything 'human'. It's just a necessary intermediate function to achieve its goal.

Rob Miles has some really good videos on AI safety research which touch on how AGI would think. That's shaped a lot of how I think about it: https://www.youtube.com/watch?v=hEUO6pjwFOo