AntiUSAbah 2 hours ago
But an LLM shows similar effects. COCONUT, PCCoT, PLaT and co are directly linked to 'thinking in latent space'; Yann LeCun is working on this too, and we have JEPA now. Also, how do you describe or explain how an LLM generates the next token when it should add a feature to an existing code base? In my opinion it has structures which allow it to create a temporary model of that code.

For sure an LLM lacks the emotional component, but consider what we humans also do, which indicates to me that we are a lot closer to LLMs than we want to be: if you have a weird body feeling (stress, hot flashes, anger, etc.), your 'text area/LLM/speech area' also tries to make sense of it. It's not always very good at doing so. That emotional body feeling is not well aligned with it, and it takes time to either understand or ignore these types of inputs to the text area/LLM/speech part of our brain.

I'm open to looking back in five years and saying 'man, that was a wild ride, but no AGI', but at the current quality of LLMs, and with all the other architectures, types of models, money, etc. being thrown at AGI, for now I don't see a ceiling at all. I only see crazy, unseen progress.
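To make the 'thinking in latent space' idea concrete: the core move in COCONUT-style continuous chain-of-thought is to skip the decode-to-a-token step and feed the model's continuous hidden state straight back in as the next input, instead of collapsing it to a discrete token and re-embedding that token. Here is a deliberately tiny numpy sketch of that contrast (all matrices and function names are hypothetical toy stand-ins, not the actual COCONUT implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
D, V = 8, 20                                  # hidden size, vocab size
W_h = rng.normal(size=(D, D)) / np.sqrt(D)    # toy "transformer" mixing matrix
W_out = rng.normal(size=(D, V)) / np.sqrt(D)  # unembedding to vocab logits
E = rng.normal(size=(V, D))                   # token embedding table

def step(x):
    """One toy forward pass: input embedding -> hidden state."""
    return np.tanh(x @ W_h)

def token_cot(x, n):
    """Ordinary chain-of-thought decoding: pick a token, re-embed it, repeat."""
    for _ in range(n):
        h = step(x)
        tok = int(np.argmax(h @ W_out))  # collapse hidden state to one token
        x = E[tok]                       # continuous information is lost here
    return h

def latent_cot(x, n):
    """COCONUT-style: feed the continuous hidden state straight back in."""
    for _ in range(n):
        h = step(x)
        x = h                            # no discretization, no re-embedding
    return h

x0 = E[3]                                # start from some token's embedding
h_tok, h_lat = token_cot(x0, 4), latent_cot(x0, 4)
print(h_tok.shape, h_lat.shape)          # both are D-dimensional hidden states
```

The point of the sketch is only the control flow: after a few steps the two trajectories diverge, because the token path throws away everything in the hidden state except the argmax at every step, while the latent path keeps the full vector.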
mort96 2 hours ago
I don't understand what part of what I said you disagree with.