halfnhalf · 6 hours ago
Don't table tennis players learn to predict how the ball will act based on their opponent's movements? It seems like if they can't do that with a robot opponent (which doesn't look or behave like a human), then they wouldn't be able to play at their best.
ACCount37 · 5 hours ago
I do expect this to have a "novelty edge" over human opponents, one that can be closed with practice on the human end. And, like many AIs, it can have "jagged capability" gaps with inhuman failure modes living in them. Humans can learn to exploit those gaps, and the robot wouldn't adapt to the exploitation because it doesn't learn continuously. This has happened with various types of ML systems designed to compete against humans.
hermitcrab · 5 hours ago
You can predict the movement of the ball (speed, direction, spin) based on the movement of the bat relative to the ball. What the rest of the player's body is doing is irrelevant to predicting what the ball will do, but it is relevant to predicting where they will be when you make the return shot.
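To make that claim concrete, here is an illustrative toy model (not from the thread; the restitution and spin-transfer coefficients are made-up assumptions) of how the ball's outgoing velocity and spin follow from the bat's motion relative to the ball at contact, in 2-D:

```python
import math

def contact_outcome(ball_v, bat_v, bat_normal, e=0.85, mu=0.25, radius=0.02):
    """Toy 2-D bat-ball contact model (illustrative only).

    ball_v, bat_v  : (x, y) velocity vectors in m/s
    bat_normal     : direction the bat face points (normalized internally)
    e              : assumed normal coefficient of restitution
    mu             : assumed fraction of tangential slip converted to spin
    radius         : ball radius in metres (40 mm table tennis ball)

    Returns (outgoing ball velocity, spin magnitude in rad/s).
    """
    nx, ny = bat_normal
    norm = math.hypot(nx, ny)
    nx, ny = nx / norm, ny / norm
    # Ball velocity relative to the bat at the moment of contact.
    rx, ry = ball_v[0] - bat_v[0], ball_v[1] - bat_v[1]
    dot = rx * nx + ry * ny
    rel_n = (dot * nx, dot * ny)              # normal component
    rel_t = (rx - rel_n[0], ry - rel_n[1])    # tangential component
    # Normal component reverses, scaled by restitution; tangential
    # component is partly absorbed and partly converted into spin.
    out_x = -e * rel_n[0] + (1 - mu) * rel_t[0] + bat_v[0]
    out_y = -e * rel_n[1] + (1 - mu) * rel_t[1] + bat_v[1]
    spin = mu * math.hypot(*rel_t) / radius
    return (out_x, out_y), spin

# A bat moving straight into the ball returns it faster with no spin;
# sliding the bat upward at contact ("brushing") adds topspin.
flat, spin0 = contact_outcome((-5.0, 0.0), (3.0, 0.0), (1.0, 0.0))
brush, spin1 = contact_outcome((-5.0, 0.0), (3.0, 2.0), (1.0, 0.0))
```

The point of the sketch is that only the bat's velocity and face orientation relative to the ball enter the calculation; nothing about the rest of the player's body appears.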
LeCompteSftware · 4 hours ago
Yes, you're dead on:
It seems like the human players might be playing in a way that tacitly overestimates their AI opponents' intelligence and underestimates their skill. AFAIK the SOTA Go AIs are still vulnerable to certain very stupid adversarial strategies that wouldn't fool an amateur (though they're not something you'd come up with in normal play; more like a weird cheat code). I wonder if this could get ironed out with a bit more training against humans rather than just simulation.