sirsinsalot, a day ago:
I don't see why a human's internal monologue isn't just a buildup of context that improves pattern matching for what comes next. The real answer is: we don't know how much it is or isn't. There's little rigor in either direction.
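(A minimal sketch of that reading, assuming a local HuggingFace transformers model: the probability a model assigns to an answer token shifts once intermediate "monologue" text sits in its context. The prompts and the answer_prob helper are made up for illustration, not anyone's actual method.)

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def answer_prob(prompt: str, answer: str) -> float:
        # Probability of the first token of `answer` right after `prompt`.
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        answer_id = tok.encode(answer)[0]
        return torch.softmax(logits, dim=-1)[answer_id].item()

    bare = "Q: What is 17 + 25? A:"
    monologue = "17 + 20 is 37, plus 5 is 42. Q: What is 17 + 25? A:"

    # The intermediate "monologue" text conditions the prediction that follows.
    print(answer_prob(bare, " 42"))
    print(answer_prob(monologue, " 42"))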
drowsspa, a day ago:
I don't have the internal monologue most people seem to have, with proper sentences, an accent, and so on. I mostly think by navigating a knowledge graph of sorts. Having to stop and translate this graph into sentences always feels kind of wasteful... so I don't really get the fuss about this chain-of-thought idea. It feels like it would be better to operate on the knowledge graph directly.
| ||||||||
vidarh, 14 hours ago:
The irony of all this is that, unlike humans, for whom we have no evidence of any ability to directly introspect lower-level reasoning processes, LLMs could be given direct access to introspect their own internal state via tooling. So, if we wanted to, we could make them able to understand and reason about their own thought processes at a level no human can. But current LLMs' chain of thought is not it.
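(A minimal sketch of what such introspection tooling could look like, assuming a local HuggingFace transformers model; the describe_internal_state helper and its report format are hypothetical. The idea is that the returned text could be handed back to the model as an ordinary tool result, giving it a view of its own internals.)

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def describe_internal_state(prompt: str, top_k: int = 5) -> str:
        # Run the model's own forward pass and summarize its internals as
        # text, which a harness could return to the model as a tool result.
        inputs = tok(prompt, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)

        # The distribution behind the next-token choice, not just the sample.
        probs = torch.softmax(out.logits[0, -1], dim=-1)
        top = torch.topk(probs, top_k)
        candidates = ", ".join(
            f"{tok.decode(int(i))!r}: {p.item():.3f}"
            for p, i in zip(top.values, top.indices)
        )

        # Per-layer activation norms at the last position: a crude trace of
        # how the representation evolves through the network.
        norms = [h[0, -1].norm().item() for h in out.hidden_states]
        layers = ", ".join(f"L{n}={v:.1f}" for n, v in enumerate(norms))

        return f"next-token candidates: {candidates}\nhidden-state norms: {layers}"

    print(describe_internal_state("The capital of France is"))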
misnome, a day ago:
Right, but the actual problem is that the marketing incentives are set up so strongly to pretend there is no difference that it's impossible to tell an extreme techno-optimist from a charlatan. Exactly like the cryptocurrency bubble. You can't claim "we don't know how the brain works, so I will claim it is this" and expect to be taken seriously.