robotresearcher 13 hours ago

  >> any human who ever lived
  > Is this falsifiable? Even restricting to those currently living? On what tests? In which way? Does the category of error matter?

Software reliably beats the best players that have ever played it in public, including Kasparov and Carlsen, the best players of my lifetime (to my limited knowledge). By analogy to the performance ratchet we see in the rest of sports and games, we might reasonably assume that these dominant living players are the best the world has ever seen. That could be wrong. But my argument does not hang on this point, so asking about falsifiability here doesn't do any work. Of course it's not falsifiable.

Y'know what else is not falsifiable? "That AI doesn't understand what it's doing".

  > can you name one functional difference between an AI that understands, and one that merely behaves correctly in its domain of expertise?
  > I'd argue you didn't understand the examples from my previous comment or the direct reply[0]. Does it become a duck as soon as you are able to trick an ornithologist? All ornithologists?

No one seems to have changed their opinion about anything in the wake of AIs routinely passing the Turing Test. People are fooled by a chatbot passing as a human, and then ask about ducks instead. The most celebrated and seriously considered 'quacks like a duck' argument has been won by the AIs, and no one cares.

By the way, the ornithologists' criteria for what counts as a duck are probably genetic and have little to do with behavior. A dead duck is still a duck.

And because we know what a duck is, no one is yelling at ducks that they 'don't really duck', or telling duck makers that they need a revolution in duck making and are doomed to failure if they don't listen.

Not so with 'understanding'.

godelski 10 hours ago | parent [-]

  > Y'know what else is not falsifiable? "That AI doesn't understand what it's doing".
Which is why people are saying we need to put in more work to define this term. Which is the whole point of this conversation.

  > The most celebrated and seriously considered quacks like a duck argument has been won by the AIs and no-one cares.
And have you ever considered that it's because people are refining their definitions?

Often, when people find that their initial beliefs are wrong or not precise enough, they update those beliefs. You seem to be calling this a flaw. It's not as if the definitions are changing dramatically; they're being refined. There's a big difference.

robotresearcher 9 hours ago | parent [-]

My first post here is me explaining that I have a non-standard definition of what ‘understanding’ means, which helps me avoid an apparently thorny issue. I’m literally here offering a refinement of a definition.

This is a weird conversation.