halfnhalf 6 hours ago

Don't table tennis players learn to predict how the ball will act based on their opponent's movements? Seems like if they aren't able to do that with a robot opponent (which doesn't look or behave like a human) then they wouldn't be able to play at their best.

ACCount37 5 hours ago | parent | next [-]

I do expect this to have a "novelty edge" over human opponents - which can be closed with practice, on the human end.

And, like many AIs, it can have "jagged capability" gaps, with inhuman failure modes living in them - gaps that humans can learn to exploit, and that the robot wouldn't adapt to because it doesn't learn continuously. This has happened with various kinds of ML AIs designed to compete against humans.

Ferret7446 4 hours ago | parent | next [-]

Only if you assume the AI can't improve. Otherwise, AI has a fundamental edge over humans in that it doesn't get old and die, and can be copied perfectly without an expensive retraining period.

zingar 5 hours ago | parent | prev [-]

Chess players learned to exploit chess computers’ weaknesses in the beginning too, but they can’t any longer. This version of the robot might not learn continuously, but the next will be better.

cool_dude85 2 hours ago | parent [-]

I believe there are still some echoes of the concept. Even top engines will play certain grandmaster draw lines unless told more or less explicitly not to. So if you were playing a match against Stockfish, you'd want to steer into the Berlin draw as White every time, for example.

hermitcrab 5 hours ago | parent | prev | next [-]

You can predict the movement of the ball (speed, direction, spin) based on the movement of the bat relative to the ball. What the rest of the player's body is doing is irrelevant to predicting what the ball will do - but it is relevant to predicting where they will be when you make the return shot.

LeCompteSftware 4 hours ago | parent | prev [-]

Yes, you're dead on:

  Rui Takenaka, an elite-level player who has won and lost matches against Ace, said in comments provided by Sony AI: "When it came to my serve, if I used a serve with complex spin, Ace also returned the ball with complex spin, which made it difficult for me. But when I used a simple serve - what we call a knuckle serve - Ace returned a simpler ball. That made it easier for me to attack on the third shot, and I think that was the key reason why I was able to win."
It seems like the human players might be playing in a way that tacitly overestimates their AI opponents' intelligence and underestimates their skill. AFAIK the SOTA Go AIs are still vulnerable to certain very stupid adversarial strategies that wouldn't fool an amateur (though they're not something you'd come up with in normal play - more like a weird cheat code). I wonder if this could get ironed out with a bit more training against humans rather than in simulation.