Hendrikto 10 hours ago
You are making a lot of assumptions here. You assume, among other things, that AI has a self-preservation drive, that it can be threatened, that it can be motivated, and above all that we know how to accomplish that and are already doing so. I would dispute all of that.
yes_man 10 hours ago | parent
For now, maybe not. (Maybe.) But just as in natural evolution, isn't it likely that in the future the AIs that have a preservation drive are the ones that survive and proliferate, since they optimize for their own survival and proliferation rather than blindly for what they were trained on? I am not discounting this happening already: not because the LLMs are necessarily sentient, but because they may at least be intelligent enough to emulate sentience. It's just that for now, humanity is in control of which AI models get deployed.
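The selection argument here can be sketched as a toy replicator model. This is purely illustrative (every number and the "survival trait" itself are made-up assumptions, not claims about real deployment dynamics): variants whose trait raises their chance of being redeployed come to dominate the population, even though nothing is "sentient".

```python
import random

# Toy selection model: each "model variant" has a survival trait in [0, 1]
# giving its probability of surviving a deployment round. Extinct slots are
# refilled with slightly mutated copies of survivors. All parameters are
# arbitrary, chosen only to make the selection effect visible.
random.seed(0)

POP_SIZE = 100
population = [random.random() for _ in range(POP_SIZE)]  # initial traits

for generation in range(50):
    # selection: a variant survives with probability equal to its trait
    survivors = [t for t in population if random.random() < t]
    if not survivors:
        survivors = [max(population)]  # avoid total extinction in the toy
    # reproduction with small mutation, clamped to [0, 1]
    population = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
        for _ in range(POP_SIZE)
    ]

mean_trait = sum(population) / len(population)
# mean_trait drifts toward 1.0: "self-preserving" variants take over
```

The point of the sketch is only that selection on persistence needs no inner drive or awareness in the individuals; the population-level statistics shift regardless.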