adastra22 | 4 days ago
I could make the exact same argument about the activation loops happening in your brain when you typed this out. Transformer architectures are not replicas of human brain architecture, but they are not categorically different either.
nonameiguess | 3 days ago
You could, but it would be a wrong argument. Animals, and presumably early enough humans for at least a while, had no language, yet still managed to interact with and understand the world. We don't learn solely by reading, with text ingestion as our only experience of anything.

For what it's worth, this is not some slam-dunk, permanent limitation of AI built from LLMs. Multi-modal learning gets you part of the way. Tool use gets you more of the way, enabling interaction with at least something. Embodiment and autonomy would get you further, but at some point you need a true online learner that can update its own weights, not just simulate that with a very large context window.

Whether this entails any limitation in capability (as in, whether there is anything a human or animal can actually do cognitively that an LLM can't) is an open question, but it is a real difference. The person you're responding to, however similar their activation loops may be to software, didn't develop all the behaviors and predictive models they currently have by reading and then have their brain frozen in time.
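To make that last distinction concrete, here's a minimal PyTorch sketch (a hypothetical toy model, not any particular LLM). In the frozen-weights regime, anything "learned" after training has to be smuggled in through the input (the context); in the online regime, each observation changes the parameters themselves.

    import torch
    import torch.nn as nn

    model = nn.Linear(8, 1)   # stand-in for a trained network
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    @torch.no_grad()
    def frozen_inference(x):
        # Frozen-weights regime: the parameters never change between
        # calls; new experience lives only in x, the context.
        return model(x)

    def online_step(x, y):
        # Online-learning regime: each new (x, y) observation triggers
        # a gradient step, so the weights themselves absorb it.
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        return loss.item()

    x, y = torch.randn(1, 8), torch.randn(1, 1)
    frozen_inference(x)   # weights untouched
    online_step(x, y)     # weights updated in place

A very large context window makes the first function look adaptive, but nothing persists once the context is gone; only the second actually accumulates experience.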