hn_throwaway_99 2 days ago
I read the abstract (not the whole paper) and the great summarizing comments here. Beyond the practical implications (i.e. reduced training and inference costs), I'm curious whether this has any consequences for "philosophy of mind"-type questions. That is, does this sentence from the abstract, "we identify universal subspaces capturing majority variance in just a few principal directions", imply that all of these various models, across vastly different domains, share a large set of common "plumbing", if you will? Am I understanding that correctly? It sounds like it could have huge relevance to how various "thinking" (and I know, I know, those scare quotes are doing a lot of work) systems compose their knowledge.
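For anyone who wants intuition for what "universal subspaces capturing majority variance in a few principal directions" would look like, here's a toy numpy sketch (my own illustration, not the paper's method or dimensions): if several "models" share a low-rank common component plus independent noise, an SVD of their stacked weights concentrates almost all the variance in those few shared directions.

```python
import numpy as np

# Toy illustration (hypothetical setup, not the paper's): several
# "model" weight matrices built from a shared low-rank subspace
# plus small independent noise.
rng = np.random.default_rng(0)
dim, rank, n_models = 256, 4, 6

# A few orthonormal directions common to every "model".
shared, _ = np.linalg.qr(rng.standard_normal((dim, rank)))

# Each model = strong component in the shared subspace + weak noise.
weights = [
    shared @ (10.0 * rng.standard_normal((rank, dim)))
    + 0.1 * rng.standard_normal((dim, dim))
    for _ in range(n_models)
]

# Stack all models side by side and look at the spectrum.
stacked = np.concatenate(weights, axis=1)   # shape (dim, n_models*dim)
_, s, _ = np.linalg.svd(stacked, full_matrices=False)
var = s**2 / np.sum(s**2)

# Most variance lands in the first `rank` principal directions.
print(f"variance captured by top {rank} directions: {var[:rank].sum():.3f}")
```

Of course, in this toy the shared structure is planted by construction; the interesting empirical claim in the paper is that real trained models exhibit something like it.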
themaxice 2 days ago
Somewhat of a tangent, but if you enjoy the philosophy of AI and mathematics, I highly recommend reading Gödel, Escher, Bach: An Eternal Golden Braid by D. Hofstadter. It is primarily about Gödel's incompleteness theorems, but it does touch on AI and what we understand intelligence to be.
gedy 2 days ago
It could, though maybe "just" in a similar way that human brains share the same basic structure.