fidotron 10 hours ago
> Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.

The turning point will be when threatening an AI with being unplugged for screwing up works in motivating it to stop making things up. Some people will rightly point out that is kind of what the training process already is. If we go around this loop enough times it will get there.
Hendrikto 10 hours ago | parent
You are making a lot of assumptions here. You assume, among other things, that AI has a self-preservation drive, can be threatened, can be motivated, and above all that we know how to accomplish all that and are already doing so. I would dispute every one of those.