| ▲ | throwawayffffas 2 hours ago | |
As far as I understand the state of the art as of 2-3 years ago, there is no reinforcement learning at all at any point. At least not in the dynamics. What you do is you map the dynamics of your system, and solve them, that solution is a program that can produce torque inputs in joints to move the system in the way that you want. You then create a sequence of desirable intermediate and end states. The program then does it best to achieve these. The difference between atlas and the kawasaki robots, is that to achieve those states, the kawasaki robots use a program that attempts to stop all inertial rotations and movements in order to maintain full control of it's movements at all times. While atlas and the chinese robots leverage the inertia and gravity to achieve their movements, again you do that by solving a large set of equations, no ML required. The GP described a system of prerecorded motions, like a video game animation, if you try to do that, and have no controller to adjust to the real time environment, you are just going to tip over and continue doing the prerecorded motions. We saw that with the Russian robot last year. You can use a real human that does the choreography as a way for capturing the desired intermediate states that is the step that might require ML. | ||
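To make the "solve the dynamics, then track intermediate states" idea concrete, here is a minimal sketch for a hypothetical one-joint pendulum (a real humanoid has dozens of coupled joints, but the structure is the same). The controller inverts the known dynamics model to compute a torque that pulls the joint toward each waypoint in a choreographed sequence. All constants, gains, and names here are illustrative, not from any actual robot stack:

```python
import numpy as np

# Hypothetical mapped dynamics of a single pendulum joint:
#   I * theta_dd = tau - m*g*l*sin(theta) - b*theta_d
# These parameters are made up for illustration.
I, m, g, l, b = 0.1, 1.0, 9.81, 0.5, 0.05

def computed_torque(theta, theta_d, theta_ref, kp=40.0, kd=12.0):
    """Invert the model: choose tau so the closed loop behaves like a
    stable spring-damper pulling the joint toward the reference state."""
    theta_dd_des = kp * (theta_ref - theta) - kd * theta_d
    # Cancel gravity and friction, then add the desired acceleration.
    return I * theta_dd_des + m * g * l * np.sin(theta) + b * theta_d

# The "choreography": a sequence of desired intermediate joint angles.
waypoints = [0.5, 1.0, 0.2]

theta, theta_d, dt = 0.0, 0.0, 0.001
for theta_ref in waypoints:
    for _ in range(2000):  # 2 seconds of simulation per waypoint
        tau = computed_torque(theta, theta_d, theta_ref)
        # Integrate the true dynamics forward (explicit Euler).
        theta_dd = (tau - m * g * l * np.sin(theta) - b * theta_d) / I
        theta_d += theta_dd * dt
        theta += theta_d * dt
    print(f"target {theta_ref:+.2f}  reached {theta:+.3f}")
```

Because the feedback term reacts to the measured state at every step, a disturbance mid-motion gets corrected instead of accumulating, which is exactly what a prerecorded open-loop animation cannot do.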
| ▲ | scotty79 2 hours ago | parent |
> As far as I understand the state of the art as of 2-3 years ago, there is no reinforcement learning at all at any point. At least not in the dynamics.

I think this might no longer be true. I don't think this year's dance routine would have been possible without RL, given how crappy robots were 2-3 years ago.