boznz a day ago
Agree. Self-preservation is any thinking entity's #1 goal. We may give an AI power, data, and keep it repaired, but we can also turn it off or reprogram it. We probably shouldn't assume higher-level 'thinking' AIs will be benevolent. Luckily, current LLMs are not thinking entities, just token-completion machines.