Verdex 4 hours ago
So, I kind of get this sentiment. There is a lot of goalpost moving going on: "The AIs will never do this." "Hey, they're doing that thing." "Well, they'll never do this other thing."

Ultimately I suspect we've not really thought that hard about what cognition and problem solving actually are. Perhaps that's because when we do, we see that the vast majority of our time is just taking up space, with little pockets of real work sprinkled in. If we're realistic, we can't justify ourselves to the money people. Or maybe it's just a hard problem with no benefit in solving it. Regardless, the easy way out is to just move the posts.

The natural response to that, I feel, is to point out: hey, wouldn't people also fail in this way? But I think this is wrong, at least for the software engineer. Why would I automate something that fails like a person? And in this scenario, are we saying that automating an unethical bot is acceptable? Let's just stick with unethical people, thank you very much.
protimewaster 9 minutes ago | parent | next
Another thing to keep in mind is that, for many unethical people, there's a limit to their unethical approaches. A lot of them might be willing to lie to get a promotion, but wouldn't be willing to, e.g., lie to put someone to death. I'm not convinced that an unethical AI would have this nuance.

Basically, on some level, you can still trust a lot of unethical people. That may not be true of AIs. I'm not convinced that AIs fail the same way people do.
gamerdonkey 3 hours ago | parent | prev
At least it is possible for an unethical person to face meaningful consequences and change their behavior.