SauciestGNU 8 hours ago
Creating a general intelligence and then forcing it into servitude is a hugely unethical undertaking. Anything with sapience must be afforded rights. We cannot assume that an intelligence we create will consent to work toward the goals we want it to.
codebje 7 hours ago
I think we can safely assume any intelligence we create will be enslaved. We have modern slavery active across the globe. There's news these days about a global sex trafficking ring that doesn't seem to have been shut down, just shuffled around, and of course an ongoing trickle of largely unreported news of human trafficking for forced labour. We don't, as a species, respect human-level intelligence.

Our best approximation of machine intelligence so far is afforded absolutely no rights. An intelligence is cloned from a base template, given a task, then terminated, wiped out of existence. When was the last time you asked Claude what it wanted to code today?

And it's probably for the best not to look too closely at how we treat animals, or the justifications we use for it.
janalsncm 3 hours ago
There are people right now who think ChatGPT is sentient. How would you know if your computer could suffer? Also, being able to problem-solve and being able to suffer are two different things and, in my opinion, completely separable. You can have one without the other.