madhadron 13 hours ago
All these transforms are switching to an eigenbasis of some differential operator (one that usually corresponds to a differential equation of interest): spherical harmonics, Bessel and Hankel functions (the radial analogues of sines/cosines and complex exponentials, respectively), and so on. The next big jumps were to collections of functions not parameterized by subsets of R^n. Wavelets use a tree-shaped parameter space. There's a whole, interesting area of overcomplete basis sets, which I have been meaning to look into, where you give up the orthogonality of your basis functions and all its nice properties in exchange for having multiple options that adapt better to different signal characteristics. I don't think these transforms are going to be relevant to understanding neural nets, though. Neural nets are, by their nature, doing something with nonlinear structures in high dimensions that do not extend smoothly across their domain, which is the opposite of the problem all our current approaches to functional analysis deal with.
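A minimal sketch of the eigenbasis point above (my example, not the commenter's): in the discrete periodic setting, the second-difference Laplacian is a circulant matrix, so the DFT vectors are its eigenvectors — a finite-dimensional analogue of sines/cosines diagonalizing d²/dx².

```python
import numpy as np

# Discrete periodic Laplacian on n points (circulant second difference).
n = 8
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = 1  # periodic boundary conditions

# DFT basis: column k is exp(2*pi*i*j*k/n) / sqrt(n).
j = np.arange(n)
F = np.exp(2j * np.pi * np.outer(j, j) / n) / np.sqrt(n)

# Applying L to each DFT column only scales it: L F = F diag(lam),
# with eigenvalues lam_k = 2*cos(2*pi*k/n) - 2.
lam = 2 * np.cos(2 * np.pi * j / n) - 2
assert np.allclose(L @ F, F * lam)
```

Switching to this basis turns the differential (here, difference) operator into a pointwise multiplication, which is exactly why Fourier-type transforms simplify the corresponding equations.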
srean 11 hours ago
You may well be right about neural networks. Sometimes models that seem nonlinear turn linear once the nonlinearities are pushed into the basis functions, so one can still hope. For GPT-like models, I see sentences as trajectories in the embedding space. These trajectories look quite complicated, and nothing obvious emerges from a geometric standpoint. My hope is that if we get the coordinate system right, we may see something more intelligible going on. This is just a hope, a mental bias; I do not have any solid argument for why it should be as I describe.
fc417fc802 13 hours ago
Note that I'm not great at math, so it's possible I've entirely misunderstood you. Here's an example of directly leveraging a transform to optimize the training process: https://arxiv.org/abs/2410.21265. And here are two examples that apply geometry to neural nets more generally: https://arxiv.org/abs/2506.13018 and https://arxiv.org/abs/2309.16512.