wkat4242 | a day ago
I think it also really limits the AI to the context of human discourse, which means it's hamstrung by our imagination, interests, and knowledge. That's not where an AGI needs to go; it shouldn't copy and paste what we think, it should think on its own. But I don't view LLMs as a path to AGI on their own. I think they're really great as text engines and for human interfacing, but there will need to be other models for the actual thinking. Instead of one model (the LLM) doing everything, I think there will be a hive of more special-purpose models, with the LLM as the layer through which they communicate with us. That would solve so many of the problems we currently have from using LLMs for things they were never meant to do.
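A minimal sketch of what that hive might look like, purely as illustration: the specialist models and the routing heuristic below are made-up placeholders, with the LLM reduced to a stand-in function that picks a specialist and rephrases its raw output for the human.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical "hive" architecture: special-purpose models do the
# actual work; a single LLM-like layer sits between them and the user.

@dataclass
class Specialist:
    name: str
    handle: Callable[[str], str]  # takes a task string, returns a raw result

def planner(task: str) -> str:
    return f"plan for: {task}"

def calculator(task: str) -> str:
    return f"computed: {task}"

SPECIALISTS: Dict[str, Specialist] = {
    "plan": Specialist("planner", planner),
    "math": Specialist("calculator", calculator),
}

def llm_interface(user_text: str) -> str:
    """Stand-in for the LLM layer: route the request to a specialist,
    then phrase its raw output back in natural language. A real system
    would use the LLM itself for both steps; here a crude digit check
    stands in for routing."""
    key = "math" if any(c.isdigit() for c in user_text) else "plan"
    raw = SPECIALISTS[key].handle(user_text)
    return f"[{SPECIALISTS[key].name}] {raw}"

print(llm_interface("2 + 2"))                # routed to the calculator
print(llm_interface("book a trip to Oslo"))  # routed to the planner
```

The point of the sketch is the separation of concerns: adding a new capability means registering a new specialist, while the user-facing language layer stays the same.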