monkeycantype a day ago
Yes, I agree it's not the angle of the article, but it is my entry point into the idea/concern/unanswered question at the end of the article: "My worry is not that these models are similar to us. It's that we are similar to these models." That is, the enormous difference in the medium and mechanics of our minds and LLMs might not be that important. Before I go any further, let me first reference The Dude:
I'm down with the idea that LLMs have been especially successful because they 'piggyback on language' – our tool and protocol for structuring, compressing, and serialising thought. That means it has been possible to train LLMs on compressed patterns of actual thought and have them produce new language that sure looks like thought, without any direct experience of the concepts being manipulated; and if they do it well enough, we will do the decompression, fleshing out the text with our experiential context.
But I suspect that there are parts of my mind that also deal with concepts in an abstract way, far from any experiential context of the concept, just like the deeper layers of a neural network. I'm open to the idea that, just as the sparse matrix of an LLM encodes connections between concepts without explicitly encoding edges, there will be multiple ways we can look at the structure of an AI model and at our anatomy so that they are only a squint and a transformation function away from interesting overlaps. That will lead to a kind of 'god of the gaps' scenario in which we conceptually carve out pieces of our minds ('oh, the visual cortex is just an X') and are left with deep questions about what we are.
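As a purely illustrative aside on the "connections without explicit edges" point, here is a toy sketch (mine, not the article's; the random numpy vectors are just stand-ins for learned embeddings) showing how pairwise similarity between concept vectors yields a relation structure that is never stored as an edge list:

    # Toy sketch: concept vectors with no explicit edge list,
    # yet pairwise similarity recovers a relation structure.
    import numpy as np

    rng = np.random.default_rng(0)
    concepts = ["dog", "cat", "car"]
    vectors = {c: rng.normal(size=8) for c in concepts}  # stand-in embeddings

    def similarity(a, b):
        va, vb = vectors[a], vectors[b]
        return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

    # The "edges" only exist when we compute them; nothing stores dog->cat explicitly.
    for a in concepts:
        for b in concepts:
            if a < b:
                print(a, b, round(similarity(a, b), 3))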