sirsinsalot a day ago

I don't see why a human's internal monologue isn't just a buildup of context to improve pattern matching ahead.

The real answer is... We don't know how much it is or isn't. There's little rigor in either direction.

drowsspa a day ago | parent | next [-]

I don't have the internal monologue most people seem to have: with proper sentences, an accent, and so on. I mostly think by navigating a knowledge graph of sorts. Having to stop to translate this graph into sentences always feels kind of wasteful...

So I don't really get the fuss about this chain of thought idea. To me, it feels like it should be better to just operate on the knowledge graph itself.

vidarh 14 hours ago | parent [-]

A lot of people don't have internal monologues. But chain of thought is about expanding capacity by externalising what you've understood so far, so you can work on ideas that exceed what you're capable of getting in one go.
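A toy illustration of that point, with no LLM involved: a "reasoner" that can only perform one small operation per call can still finish a larger task if each intermediate result is written out and fed back in. The scratchpad plays the role of the externalised chain of thought. (The function names here are invented for the sketch.)

```python
def step(state):
    """A 'reasoner' limited to one small operation per call:
    it may only combine the first two numbers in the list."""
    if len(state) < 2:
        return state
    a, b, *rest = state
    return [a + b] + rest

def chain_of_thought(numbers):
    """Externalise intermediate state: write each partial result
    back out and feed it into the next step. The trace is the
    'chain of thought' -- every intermediate state, made explicit."""
    state = list(numbers)
    trace = [list(state)]
    while len(state) > 1:
        state = step(state)
        trace.append(list(state))
    return state[0], trace

total, trace = chain_of_thought([3, 1, 4, 1, 5])
# total == 14; trace holds every intermediate state:
# [[3, 1, 4, 1, 5], [4, 4, 1, 5], [8, 1, 5], [9, 5], [14]]
```

No single call to `step` could sum the whole list; the capacity comes entirely from looping the externalised state back in.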

That people seem to think it reflects internal state is a problem, because even for humans with an internal monologue, we have no reason to think that monologue fully and accurately reflects the underlying thought processes.

There are some famous experiments with split-brain patients, whose corpus callosum has been severed. Because the two hemispheres control different parts of the body, researchers can use this to "trick" one half of the brain into thinking that "the brain" has made a decision about something, such as choosing an object, while the researchers swap the object. The "tricked" half of the brain will happily explain why "it" chose the object in question, expanding on thought processes that never happened.

In other words, our own verbalisation of our thought processes is woefully unreliable. It represents an idea of our thought processes that may or may not bear any relation to the real ones, and that we have no basis for assuming is correct.

vidarh 14 hours ago | parent | prev | next [-]

The irony of all this is that unlike humans - who, as far as we can tell, cannot directly introspect their lower-level reasoning processes - LLMs could be given direct access to introspect their own internal state, via tooling. So if we want to, we can make them able to understand and reason about their own thought processes at a level no human can.

But current LLMs' chain of thought is not it.
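Purely as a hypothetical sketch of what such tooling might look like: a tool that surfaces the next-token candidates the model itself weighed at a given position, so the model can be fed back information a plain chain of thought never exposes. Everything here is invented for illustration (the `state` dict, `get_topk_alternatives`); no real LLM API is implied.

```python
def get_topk_alternatives(state: dict, position: int, k: int = 3) -> list:
    """Hypothetical introspection tool: return the k next-token
    candidates the model weighed at a given decoding position,
    ranked by probability."""
    probs = state["token_probs"][position]  # {token: probability}
    ranked = sorted(probs.items(), key=lambda kv: -kv[1])
    return ranked[:k]

# A captured decoding step (toy numbers):
state = {"token_probs": [{"cat": 0.62, "dog": 0.31, "car": 0.07}]}
get_topk_alternatives(state, 0, 2)  # → [("cat", 0.62), ("dog", 0.31)]
```

The point of the sketch is only that this information exists inside the model and could, in principle, be routed back to it, which is not true of human cognition.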

misnome a day ago | parent | prev [-]

Right, but the actual problem is that the marketing incentives are so strongly set up to pretend there isn't any difference that it's impossible to differentiate between an extreme techno-optimist and a charlatan. Exactly like the cryptocurrency bubble.

You can't claim "we don't know how the brain works, so I will claim it is this" and expect to be taken seriously.