mystraline 3 days ago
> BP's modern version (also called the reverse mode of automatic differentiation)

So... automatic integration? Proportional, integral, derivative. A PID loop sure sounds like what they're talking about.
eigenspace 3 days ago
Reverse-mode automatic differentiation is not integration. It's still differentiation, just a different method of calculating the derivative than the one you'd think to do by hand: it applies the chain rule in the opposite order from what is intuitive to people.

It has a lot more overhead than regular forward-mode autodiff because you need to cache values from running the function and refer back to them in reverse order, but the advantage is that for a function with many inputs and very few outputs (the classic example is the gradient of a scalar function in a high-dimensional space, as in gradient descent), it is algorithmically more efficient and requires only one pass through the primal function. Traditional forward-mode derivatives, on the other hand, are most efficient for functions with very few inputs but many outputs. It's essentially a duality relationship.
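To make the pass-count asymmetry concrete, here is a minimal sketch of forward-mode AD using dual numbers (purely illustrative names, not any particular library's API): computing the full gradient of a scalar function of n inputs this way takes n forward passes, one per seeded input, which is exactly the regime where reverse mode's single cached backward sweep wins.

```python
from dataclasses import dataclass


@dataclass
class Dual:
    val: float   # primal value
    dot: float   # derivative of this value w.r.t. the seeded input

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)


def f(x, y, z):
    # example scalar function of three inputs
    return x * y + y * z


def forward_gradient(inputs):
    """Full gradient via forward mode: one pass per input variable."""
    grad = []
    for i in range(len(inputs)):
        # seed input i with derivative 1, every other input with 0
        duals = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(inputs)]
        grad.append(f(*duals).dot)
    return grad


print(forward_gradient([2.0, 3.0, 4.0]))  # [3.0, 6.0, 3.0]
```

A reverse-mode counterpart, which gets the same gradient from a single backward sweep, is sketched under the next comment.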
imtringued 3 days ago
Forward mode automatic differentiation creates a formula for each scalar derivative. If you have a billion parameters you have to calculate each derivative from scratch. As the name implies, the calculation is done forward. Reverse mode automatic differentiation starts from the root of the symbolic expression and calculates the derivative for each subexpression simultaneously. The difference between the two is like the difference between calculating the Fibonacci sequence recursively without memoization and calculating it iteratively. You avoid doing redundant work over and over again.
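For contrast with the forward-mode sketch above, here is a minimal tape-based reverse-mode sketch (hypothetical class and function names, not from any library): the primal pass records each subexpression and its local derivatives once, and a single backward sweep from the root accumulates every partial derivative simultaneously, which is the memoization-style saving described above.

```python
class Var:
    def __init__(self, val, parents=()):
        self.val = val            # primal value, cached for the backward pass
        self.parents = parents    # (input Var, local derivative) pairs
        self.adjoint = 0.0        # d(root)/d(this node), accumulated backwards

    def __add__(self, other):
        return Var(self.val + other.val, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.val * other.val, ((self, other.val), (other, self.val)))


def backward(root):
    """Single sweep from the root: each node's adjoint is finalized once."""
    root.adjoint = 1.0
    order, seen = [], set()

    def visit(node):
        # depth-first post-order gives a topological order of the tape
        if id(node) in seen:
            return
        seen.add(id(node))
        for parent, _ in node.parents:
            visit(parent)
        order.append(node)

    visit(root)
    for node in reversed(order):
        for parent, local in node.parents:
            parent.adjoint += node.adjoint * local


x, y, z = Var(2.0), Var(3.0), Var(4.0)
out = x * y + y * z          # primal pass: values and local derivatives recorded
backward(out)                # one backward pass yields the whole gradient
print(x.adjoint, y.adjoint, z.adjoint)  # 3.0 6.0 3.0
```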
digikata 3 days ago
There are large bodies of work on optimization in state-space control theory that I strongly suspect have a lot of crossover with AI, or at least a very similar mathematical structure. E.g., optimizing state-space control coefficients looks something like training an LLM's weight matrices...