CrossVR 6 days ago

Now you're getting to the heart of the thought experiment. Because does it really understand the code or subtext, or is it just really good at fooling us that it does?

When it makes a mistake, was its understanding simply too limited, or did it just get unlucky with its prediction of the next word? Is there even a difference between the two?

I would like to agree with you that there's no special "causal power" that Turing machines can't emulate. But I remain skeptical, not out of chauvinism, but out of caution, because I think it's dangerous to assume an AI understands a problem simply because it said the right words.