mapontosevenths | 2 hours ago

> His definition of reaching AGI, as I understand it, is when it becomes impossible to construct the next version of ARC-AGI because we can no longer find tasks that are feasible for normal humans but unsolved by AI.

That is the best definition I've read yet. If something claims to be conscious and we can't prove it isn't, we have no choice but to believe it.

That said, I'm reminded of the impossible voting tests they used to give Black people to prevent them from voting. We don't ask nearly so much proof from a human; we take their word for it. On the few occasions we did ask for proof, it inevitably led to horrific abuse.

Edit: The average human tested scores 60%. So the machines are already smarter on an individual basis than the average human.
estearum | 2 hours ago

> If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.

This is not a good test. A dog won't claim to be conscious but clearly is, despite your not being able to prove it one way or the other. GPT-3 will claim to be conscious and (probably) isn't, again despite your not being able to prove it one way or the other.
WarmWash | 2 hours ago

> because we can no longer find tasks that are feasible for normal humans but unsolved by AI.

"Answer 'I don't know' if you don't know the answer to one of the questions."
sva_ | 2 hours ago

> Edit: The average human tested scores 60%. So the machines are already smarter on an individual basis than the average human.

I think being better at this particular benchmark does not imply they're 'smarter'.
criddell | an hour ago

> The average human tested scores 60%. So the machines are already smarter on an individual basis than the average human.

Maybe it's testing the wrong things, then. Even those of us who are merely average can do lots of things that machines don't seem to be very good at. I think the ability to learn should be a core part of any AGI. Take a toddler who has never seen anybody doing laundry before: you can teach them in a few minutes how to fold a t-shirt. Where are the dumb machines that can be taught?
woah | 2 hours ago

> If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.

Can you "prove" that GPT-2 isn't conscious?