srean a day ago
You may well be right about neural networks. Sometimes models that seem nonlinear turn linear once the nonlinearities are pushed into the basis functions, so one can still hope. For GPT-like models, I see sentences as trajectories in the embedding space. These trajectories look quite complicated, and nothing obvious jumps out from a geometric standpoint. My hope is that if we get the coordinate system right, we may see something more intelligible going on. This is just a hope, a mental bias; I do not have any solid argument for why it should be as I describe.
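A minimal numpy sketch of that trick (the sine data and the cubic polynomial basis are arbitrary, illustrative choices): the model is nonlinear in the input x but linear in the weights w, so plain least squares fits it.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 2 * np.pi, 50)
    y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

    # Basis functions 1, x, x^2, x^3: f(x) = Phi(x) @ w is nonlinear
    # in x but linear in w, so ordinary least squares applies.
    Phi = np.column_stack([x**k for k in range(4)])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    y_hat = Phi @ w  # predictions from the linear-in-w model
    print("fit coefficients:", w)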
nihzm a day ago
> Sometimes models that seem nonlinear turn linear once the nonlinearities are pushed into the basis functions, so one can still hope.

That idea was pushed to its limit in Koopman operator theory. The argument sounds quite good at first, but unfortunately it can't really work for all cases in its current formulation [1].
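For intuition, a sketch of the standard Koopman lifting example (the values of mu and lam and the initial condition are arbitrary; assumes numpy and scipy): the nonlinear system x1' = mu*x1, x2' = lam*(x2 - x1^2) becomes exactly linear in the lifted coordinates z = (x1, x2, x1^2), because z3 = x1^2 obeys z3' = 2*mu*z3.

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    mu, lam = -0.05, -1.0
    x0 = np.array([1.0, 0.5])

    # Nonlinear dynamics, integrated numerically to high accuracy.
    def f(t, x):
        return [mu * x[0], lam * (x[1] - x[0] ** 2)]

    T = 5.0
    sol = solve_ivp(f, (0.0, T), x0, rtol=1e-10, atol=1e-12)
    x_nl = sol.y[:, -1]

    # Lifted linear dynamics z' = A z, solved exactly via the matrix
    # exponential; the first two components reproduce (x1, x2).
    A = np.array([[mu, 0.0, 0.0],
                  [0.0, lam, -lam],
                  [0.0, 0.0, 2 * mu]])
    z0 = np.array([x0[0], x0[1], x0[0] ** 2])
    z_T = expm(A * T) @ z0

    print("nonlinear:", x_nl)     # (x1, x2) from the nonlinear ODE
    print("lifted:   ", z_T[:2])  # same values from the linear system

Here three observables happen to close exactly; in general the lift needs infinitely many, which is where the trouble starts.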
| ||||||||
madhadron a day ago
I’m not sure what you mean by a change of basis making a nonlinear system linear. A linear system is one where solutions add as elements of a vector space. That’s true no matter what basis you express it in. | ||||||||
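As a concrete numeric check of that definition, a minimal sketch with x' = x^2, whose solution x(t) = x0 / (1 - x0*t) is known in closed form: solutions from two initial conditions do not add, and no change of basis fixes that. What the Koopman construction does instead is embed the state into a larger space of observables, which is a nonlinear map rather than a change of basis.

    # Superposition check for x' = x**2, exact solution x(t) = x0 / (1 - x0*t).
    # For a linear system the flow from a + b would equal the sum of the
    # flows from a and b; here it does not.
    def phi(x0, t):
        return x0 / (1.0 - x0 * t)

    a, b, t = 0.1, 0.2, 1.0
    print(phi(a + b, t))          # 0.42857...
    print(phi(a, t) + phi(b, t))  # 0.36111...: superposition fails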
| ||||||||