root_axis 10 hours ago

I'm not exactly sure what you mean. Could you please elaborate further?

a1j9o94 10 hours ago | parent [-]

Not the person you're responding to, but I think there's a non-trivial argument to be made that our thoughts are just autocomplete: what is the next most likely word, given what you're seeing? Ever watched a movie and guessed the plot? Or read a comment and known where it was going to go by the end?

And I know not everyone thinks in a literal stream of words all the time (I do), but I would argue that those people's brains are just using a different kind of "token".
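The "next most likely word" mechanism being analogized here can be sketched as a toy bigram autocomplete. This says nothing about how brains actually work; the corpus and names are purely illustrative:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(autocomplete("the"))  # "cat" follows "the" most often here
```

Everything beyond this toy (real autocomplete, let alone an LLM) replaces the bigram counts with a learned model, but the interface is the same: context in, most likely continuation out.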

root_axis 9 hours ago | parent | next [-]

There's no evidence for it, nor any explanation for why it should be the case from a biological perspective. Tokens are an artifact of computer science that have no reason to exist inside humans. Human minds don't need a discrete dictionary of reality in order to model it.

Prior to LLMs, there was never any suggestion that thoughts work like autocomplete, but now people are working backwards from that conclusion based on metaphorical parallels.

LiKao 8 hours ago | parent | next [-]

There actually was quite a lot of suggestion that thoughts work like autocomplete. A lot of it was just considered niche, e.g. because the mathematical formalisms were beyond what most psychologists or even cognitive scientists would deem useful.

Predictive coding theory was formalized around 2010 and traces its roots back to theories by Helmholtz from the 1860s.

Predictive coding theory postulates that our brains are just very strong prediction machines, with multiple layers of predictive machinery, each layer predicting the activity of the one below it.

red75prime 8 hours ago | parent | prev | next [-]

There are so many theories regarding human cognition that you can certainly find something that is close to "autocomplete". A Hopfield network, for example.
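The sense in which a Hopfield network is "close to autocomplete" is pattern completion: store a pattern, then recover it from a corrupted cue. A minimal sketch (single stored pattern, Hebbian outer-product weights; the pattern itself is arbitrary):

```python
import numpy as np

# Store one +/-1 pattern with the Hebbian outer-product rule.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = patterns.T @ patterns
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=5):
    """Iterate the network until it settles on a stored pattern."""
    state = state.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)  # synchronous update
    return state

cue = patterns[0].copy()
cue[:2] *= -1  # corrupt the cue by flipping two bits
print(recall(cue))  # settles back on the stored pattern
```

The network "completes" the degraded input to the nearest stored memory, which is the associative-recall behavior the comment is gesturing at.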

The roots of predictive coding theory extend back to the 1860s.

Natalia Bekhtereva was writing about compact concept representations in the brain akin to tokens.

A4ET8a8uTh0_v2 5 hours ago | parent | prev [-]

<< There's no evidence for it

Fascinating framing. What would you consider evidence here?

9dev 9 hours ago | parent | prev [-]

You, and OP, are taking an analogy way too far. Yes, humans have the mental capability to predict words similar to autocomplete, but obviously this is just one out of a myriad of mental capabilities typical humans have, which work regardless of text. You can predict where a ball will go if you throw it, you can reason about gravity, and so much more. It’s not just apples to oranges, not even apples to boats, it’s apples to intersubjective realities.

A4ET8a8uTh0_v2 5 hours ago | parent | next [-]

I don't think I am. To be honest, as ideas go, as I swirl it around that empty head of mine, this one ain't half bad, given how much immediate resistance it generates.

Other posters have already noted other reasons, but I will note that your phrasing 'similar to autocomplete, but obviously' suggests you recognize the shape and immediately dismiss it as not the same, because the version you know in humans is much more evolved and can do more things. Ngl man, as arguments go, that sounds to me like a supercharged autocomplete that was allowed to develop over a number of years.

9dev 3 hours ago | parent [-]

Fair enough. To someone with a background in biology, it sounds like an argument made by a software engineer with no actual knowledge of cognition, psychology, biology, or any related field, jumping to misled conclusions driven only by shallow insights and their own experience in computer science.

Or in other words, this thread sure attracts a lot of armchair experts.

quesera 17 minutes ago | parent [-]

> with no actual knowledge of cognition, psychology, biology

... but we also need to be careful with that assertion, because humans do not understand cognition, psychology, or biology very well.

Biology is the furthest developed, but it turns out to be like physics -- superficially and usefully modelable, but fundamental mysteries remain. We have no idea how complete our models are, but they work pretty well in our standard context.

If computer engineering is downstream from physics, and cognition is downstream from biology ... well, I just don't know how certain we can be about much of anything.

> this thread sure attracts a lot of armchair experts.

"So we beat on, boats against the current, borne back ceaselessly into our priors..."

LiKao 8 hours ago | parent | prev [-]

Look up predictive coding theory. According to that theory, what our brain does is in fact just autocomplete.

However, it is layered autocomplete on itself: one part tries to predict what another part will produce, and trains itself on that prediction.

What emerges from this stack of autocompletes is what we call thought.
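The "layer predicting a layer" idea can be sketched as a toy, Rao-Ballard-style predictive coding loop: an upper layer holds an estimate, generates a top-down prediction of the signal below, and corrects itself from the bottom-up prediction error. All dimensions, weights, and the learning rate here are illustrative, and real predictive coding models learn the generative weights too:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hidden cause x generates a signal through generative weights G
# (assumed known here for simplicity).
G = rng.normal(size=(8, 3))
x_true = np.array([1.0, -0.5, 2.0])
signal = G @ x_true              # what the lower layer "sees"

x_hat = np.zeros(3)              # upper layer's running estimate
lr = 0.02
for _ in range(1000):
    prediction = G @ x_hat       # top-down prediction of the input
    error = signal - prediction  # bottom-up prediction error
    x_hat += lr * G.T @ error    # descend on the squared error

print(np.round(x_hat, 2))  # x_hat has converged near x_true
```

The full theory stacks many such prediction/error loops, with each layer playing "signal" for the one above it, which is the layered-autocomplete picture described above.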