luckydata a day ago:
That would be the biggest mistake anyone could make; I hope nobody goes down this route. An AI that "wants" things is an enormous risk to alignment.
pixl97 a day ago:
I mean, giving any neural net a "goal" is really just defining a want/need. You can't encode the entire problem space of reality; you have to give the application something to filter it with.
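(A minimal, illustrative sketch of that point, assuming a plain least-squares objective as the stand-in "goal": the optimizer only ever sees what the objective function scores, so that function is the whole encoded "want".)

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # the slice of reality the system gets to see
y = X @ np.array([1.0, -2.0, 0.5])     # the part of it we happen to care about

w = np.zeros(3)                        # model parameters

def objective(w):
    # The entire "want": make predictions match y. Anything about reality
    # that this function ignores, the trained model is free to ignore too.
    return np.mean((X @ w - y) ** 2)

for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the objective
    w -= 0.05 * grad                        # follow the "want" downhill

print(objective(w))  # near zero: the system has satisfied the goal it was given
```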
idiotsecant a day ago:
At some point I think we'll have to face the idea that any AI more intelligent than ourselves will, by definition, be able to evade our alignment tricks.