▲ mlpro | 2 days ago
Not really. If the models are trained on different datasets - like one ViT trained on satellite images and another on medical X-rays - one would expect their parameters, which were randomly initialized, to end up completely different or even close to orthogonal.
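For a sense of that baseline expectation, here's a minimal numpy sketch (dimension picked arbitrarily, nothing to do with the paper): two independently initialized weight vectors in high dimensions are already nearly orthogonal, so any strong alignment after training on unrelated data is the part that needs explaining.

```python
import numpy as np

# Two independently initialized weight vectors at a typical ViT width.
rng = np.random.default_rng(0)
d = 768
w1 = rng.standard_normal(d)
w2 = rng.standard_normal(d)

# Cosine similarity concentrates around 0 (on the order of 1/sqrt(d)),
# i.e. random initializations start out essentially orthogonal.
print(w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2)))
```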
▲ energy123 | 2 days ago | parent
Every vision task needs edge/contrast/color detectors, and these should be mostly the same across ViTs, needing only a rotation and scaling in the subspace. Likewise with language tasks and encoding the basic rules of language, which are the same regardless of application. So it is no surprise to see intra-modality shared variation. The surprising thing is inter-modality shared variation: I wouldn't have bet against it, but I also wouldn't have guessed it. I would like to see interpretability work on whether these subspace vectors can be read as low-level or high-level abstractions. Are they picking up low-level "edge detectors" that are somehow invariant to modality (and if so, why?), or are they picking up higher-level concepts like distance vs. closeness?
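For anyone who wants to poke at the "only a rotation and scaling in the subspace" part: a standard way to quantify it (not the paper's method, just a sketch) is to compare principal angles between the top-k singular subspaces of two weight matrices. Shapes, the k=16 cut-off, and the random matrices standing in for real checkpoints are all placeholders here.

```python
import numpy as np

def top_k_basis(W, k):
    # Orthonormal basis for the span of the top-k left singular vectors of W.
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    return U[:, :k]

def principal_angle_cosines(W_a, W_b, k=16):
    # Singular values of Ua^T Ub are the cosines of the principal angles
    # between the two subspaces: ~1 means aligned up to a rotation within
    # the subspace, ~0 means close to orthogonal.
    Ua, Ub = top_k_basis(W_a, k), top_k_basis(W_b, k)
    return np.linalg.svd(Ua.T @ Ub, compute_uv=False)

# Hypothetical stand-ins for, say, the patch-embedding weights of two ViTs
# trained on different modalities (values chosen only for illustration).
rng = np.random.default_rng(0)
W_sat = rng.standard_normal((768, 768))
W_xray = rng.standard_normal((768, 768))
print(principal_angle_cosines(W_sat, W_xray))
```

Run on real checkpoints, cosines near 1 would say the two models' filters span essentially the same subspace up to an internal rotation; cosines near 0 would say the subspaces are unrelated.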
| ||||||||
▲ crooked-v | 2 days ago | parent
Now I wonder how much this "Universal Subspace" corresponds to the same set of scraped Reddit posts and pirated books that apparently all the bigcorps used for model training. Is it 'universal' because it's universal, or because the same book-pirating torrents got reused all over? | ||||||||