tsurba 2 days ago
Many discriminative models converge to the same representation space up to a linear transformation, so it makes sense that another linear transformation (like PCA) would be able to undo it. https://arxiv.org/abs/2007.00810 Without having properly read the linked article, if that's all this is, it's not a particularly new result. Nevertheless, this direction of proofs is imo at the core of understanding neural nets.
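The "undo it with a linear map" intuition can be sketched in a few lines. This is a toy demo under assumed conditions (two representation spaces that genuinely differ only by an unknown invertible linear map plus small noise, which is the idealized version of the claim, not the paper's actual setup); the alignment map is recovered with ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: representations from two "models" that differ only
# by an unknown invertible linear map A, plus small noise.
n, d = 500, 16
Z1 = rng.normal(size=(n, d))                   # representations from model 1
A = rng.normal(size=(d, d))                    # unknown linear transformation
Z2 = Z1 @ A + 0.01 * rng.normal(size=(n, d))   # representations from model 2

# Recover the alignment with least squares: find A_hat with Z1 @ A_hat ≈ Z2.
A_hat, *_ = np.linalg.lstsq(Z1, Z2, rcond=None)

# The fitted linear map aligns the two spaces almost exactly.
err = np.linalg.norm(Z1 @ A_hat - Z2) / np.linalg.norm(Z2)
print(f"relative alignment error: {err:.3f}")
```

If the spaces really do coincide up to a linear transformation, the relative error stays near the noise level; a large residual would mean the relationship between the two spaces is not linear.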
mlpro 2 days ago | parent
It's about weights/parameters, not representations.