GistNoesis 4 days ago

The practical reason is speed and low resource consumption.

It solves the Inverse Kinematics problem on the order of microseconds.

It's almost a closed form solution.

Typically it needs to run inside an embedded device, where memory is scarce and dynamic allocation is avoided.

It will probably sit inside the control loop of a "Real Time OS" or chip, because the end effector position is usually what matters. So your commands and PID setpoints are expressed in that basis, and you need to transform them into angle commands for your servo motors 10-100K times per second.

Most machine learning frameworks do not run in real time, which introduces jitter in the command timing, and their control loops typically top out around 100 Hz. In a cascaded controller fashion, what you typically do is use slow (non-RTOS) systems running machine learning for the high-level tasks, and transmit the low-level commands to the controller as desired positions, velocities, and accelerations that follow the planned trajectory, then hope nothing too brutal happens (like the spindle at the end of your arm encountering more resistance, or your robot leg hitting the ground or touching an obstacle) in the hundredths of a second before you can get feedback.
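The cascade described above can be sketched roughly as follows. This is an illustrative inner-loop step, with made-up gains and struct names, not any real firmware: the slow planner periodically hands over a setpoint (desired position, velocity, acceleration), and the fast real-time loop turns it into a torque command with PD feedback plus an acceleration feedforward term:

```c
/* Setpoint published by the slow (non-RTOS) planner, e.g. at 100 Hz. */
typedef struct {
    double pos, vel, acc;
} setpoint_t;

/* Per-joint controller state (illustrative gains and model term). */
typedef struct {
    double kp, kd;      /* PD feedback gains */
    double inertia;     /* crude feedforward model */
} joint_ctrl_t;

/* One iteration of the fast (10-100 kHz) inner loop:
   torque = PD feedback on tracking error + acceleration feedforward. */
static double control_step(const joint_ctrl_t *c, setpoint_t sp,
                           double meas_pos, double meas_vel)
{
    return c->kp * (sp.pos - meas_pos)
         + c->kd * (sp.vel - meas_vel)
         + c->inertia * sp.acc;
}
```

Between planner updates the inner loop keeps reacting at full rate to the measured state, which is what limits the damage when the arm meets unexpected resistance before the slow loop can replan.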