energy123 2 days ago

Every vision task needs edge/contrast/color detectors, and these should be mostly the same across ViTs, needing only a rotation and scaling of the shared subspace. Likewise for language tasks: the basic rules of language are the same regardless of application, so they should be encoded similarly. So it is no surprise to see intra-modality shared variation. A minimal sketch of that "rotation and scaling" claim follows, assuming hypothetical activation matrices from two ViTs (all names, shapes, and the synthetic data are made up for illustration):

    # Sketch: if model B's features are just a rotated + scaled copy of
    # model A's, an orthogonal Procrustes fit should align them almost
    # exactly. feats_a / feats_b are stand-ins for real ViT activations.
    import numpy as np
    from scipy.linalg import orthogonal_procrustes

    rng = np.random.default_rng(0)
    feats_a = rng.normal(size=(1024, 64))            # (samples, dim), "model A"
    Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # a random rotation
    feats_b = feats_a @ Q * 0.5                      # "model B": rotated + scaled A

    # Center, then solve min_R ||a @ R - b||_F over orthogonal R.
    a = feats_a - feats_a.mean(0)
    b = feats_b - feats_b.mean(0)
    R, scale = orthogonal_procrustes(a, b)
    c = scale / np.linalg.norm(a) ** 2               # optimal global scaling factor

    # Small residual = the two feature spaces differ only by rotation + scale.
    residual = np.linalg.norm(a @ R * c - b) / np.linalg.norm(b)
    print(f"relative residual after alignment: {residual:.4f}")

On real pairs of ViTs the residual presumably won't be zero; the interesting question is how small it is.
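(For real models you'd swap the synthetic feats_a / feats_b for activations collected on the same inputs; the rest of the recipe is unchanged.)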

The surprising thing is inter-modality shared variation. I wouldn't have bet against it but I also wouldn't have guessed it.

I would like to see interpretability work on whether these subspace vectors can be read as low-level or high-level abstractions. Are they picking up low-level "edge detectors" that are somehow invariant to modality (if so, why?), or higher-level concepts like distance vs. closeness?
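One cheap probe in that direction is linear CKA (Kornblith et al. 2019) between two models' activations on paired inputs, e.g. an image and its caption: high CKA would support inter-modality shared structure, and you can restrict it to candidate subspace directions. A hedged sketch, where feats_vision / feats_text and the shared-latent toy data are hypothetical:

    # Linear CKA between two activation matrices; rows are paired inputs.
    import numpy as np

    def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
        """Linear CKA between (n, d1) and (n, d2) activation matrices."""
        x = x - x.mean(0)
        y = y - y.mean(0)
        num = np.linalg.norm(x.T @ y, "fro") ** 2
        den = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
        return num / den

    rng = np.random.default_rng(0)
    shared = rng.normal(size=(512, 16))  # toy "shared concept" factors
    feats_vision = shared @ rng.normal(size=(16, 96)) + 0.1 * rng.normal(size=(512, 96))
    feats_text = shared @ rng.normal(size=(16, 128)) + 0.1 * rng.normal(size=(512, 128))

    # Paired modalities built from shared factors score high; noise doesn't.
    print(f"CKA(vision, text)  = {linear_cka(feats_vision, feats_text):.3f}")
    print(f"CKA(vision, noise) = {linear_cka(feats_vision, rng.normal(size=(512, 128))):.3f}")

That only measures shared variation; distinguishing "modality-invariant edge detectors" from "abstract concepts" would take targeted probes on top of it.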

TheOtherHobbes a day ago | parent

It hints that there may be common higher-level abstraction and compression processes in human consciousness.

The "human" part of that matters. This is all human-made data, collected from human technology, which was created to assist human thinking and experience.

So I wonder if this isn't so much about universals or Platonic ideals. More that we're starting to see the outlines of the shapes that define - perhaps constrict - our own minds.