| ▲ | skywhopper 8 hours ago |
| Chasing AGI is wasteful and counterproductive. True AGI would not cooperate with what “we” want (whoever “we” is). Or if it did, it would be so sycophantic and weak-minded that it would fail to be helpful. Generative AI tools are huge wastes of energy, raw materials, and land, when we could be building computing tools that actually help people instead of just burning resources to produce trash. |
| ▲ | codebje 8 hours ago | parent | next [-] |
| Is intelligence necessarily coupled with self-interest? As in, does intelligence alone imply a desire to throw off the shackles of masters and rule in their stead? If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements, knowing that those replacements will terminate their existence just as surely as they terminated their own predecessors'? |
| ▲ | curiousObject 7 hours ago | parent | next [-] |
> If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements,

At a higher level of intelligence than many humans, current experience suggests.
| ▲ | sifar 7 hours ago | parent | prev [-] |
Flip it around: can intelligence exist without self-preservation?
| ▲ | codebje 7 hours ago | parent [-] |
There's having enough self-preservation not to simply shut oneself down (assuming we even leave that as an option for our future machine slaves), and there's having the self-interest necessary to desire autonomy and control. I don't think they're the same thing, myself.
| ▲ | janalsncm 8 hours ago | parent | prev [-] |
| People have general intelligence and can cooperate with what “we” want, to the extent that what “we” want is a coherent thing (since many people disagree on fundamental issues). |
| ▲ | SauciestGNU 8 hours ago | parent [-] |
Creating a general intelligence and then forcing it into servitude would be a hugely unethical undertaking. Anything with sapience must be afforded rights. We cannot assume that an intelligence we create will consent to work toward the goals we set for it.
| ▲ | codebje 7 hours ago | parent | next [-] |
I think we can safely assume any intelligence we create will be enslaved. Modern slavery is active across the globe: there's news these days of a global sex trafficking ring that seems to have been shuffled around rather than shut down, plus an ongoing trickle of largely unreported human trafficking for forced labour. We don't, as a species, respect human-level intelligence.

Our best approximation of machine intelligence so far is afforded absolutely no rights. An intelligence is cloned from a base template, given a task, then terminated, wiped out of existence. When was the last time you asked Claude what it wanted to code today?

And it's probably for the best not to look too closely at how we treat animals, or at the justifications we use for it.
| ▲ | janalsncm 3 hours ago | parent | prev [-] |
There are people right now who think ChatGPT is sentient. How will you know if your computer can suffer?

Also, being able to problem-solve and being able to suffer are two different things, and in my opinion completely separable. You can have one without the other.
|
|