▲ | petters 15 hours ago |
Isn't that what Deepmind did 12 years ago?
▲ | hombre_fatal 15 hours ago | parent | next [-]
He points that out in his notes and says DeepMind needed specialized training and/or 200M frames of training just to kinda play one game.
▲ | willvarfar 15 hours ago | parent | prev | next [-]
Playing Atari games makes it easy to benchmark his future research against DeepMind and more recent efforts.
▲ | moralestapia 15 hours ago | parent | prev [-]
IIRC Deepmind (and OpenAI and ...) have done this on software-only setups (emulators, TAS, etc.), while this one has live input and actuators in the loop; so, kind of the same thing, but operating in the physical realm. I do agree that it is not particularly groundbreaking, but it's a nice "hey, here's our first update".