dvt 13 hours ago

> Of course it knows what it output a token ago...

It doesn't know anything. It has a bunch of weights that were updated by the previous stuff in the token stream. At least our brains, whatever they do, certainly don't function like that.

Borealid 13 hours ago | parent | next [-]

I don't know anything (or even much) about how our brains function, but the idea of a neuron sending an electrical output when the sum of the strengths of its inputs exceeds some value seems to me like "a bunch of weights" getting repeatedly updated by stimulus.
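The threshold mechanism described above can be sketched as a toy artificial neuron. This is a deliberate simplification of both biological neurons and trained network units (the function name and numbers are made up for illustration), but it shows "sum of weighted inputs vs. a threshold" concretely:

```python
# Toy threshold neuron: fires (outputs 1) when the weighted sum of its
# inputs exceeds a threshold. A simplification of both real neurons and
# the units in modern neural networks.
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Two input signals: one strongly weighted, one weakly weighted.
print(neuron([1.0, 1.0], [0.9, 0.2], threshold=1.0))  # fires: 1.1 > 1.0
print(neuron([1.0, 0.0], [0.9, 0.2], threshold=1.0))  # silent: 0.9 <= 1.0
```

Whether "updating weights as stimulus comes in" and "updating activations as tokens come in" are the same kind of thing is exactly what the thread is debating; the sketch only shows the summation-and-threshold part both sides take for granted.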

To you it might be obvious our brains are different from a network of weights being reconfigured as new information comes in; to me it's not so clear how they differ. And I do not feel I know the meaning of the word "know" clearly enough to establish whether something that can emit fluent text about a topic is somehow excluded from "knowing" about it through its means of construction.

thrownthatway 12 hours ago | parent | prev | next [-]

Wait till you learn how human memory works.

Every time you recall a memory, it is modified; every time you verbalise a memory, it is modified even more.

Eye-witness accounts are notoriously unreliable, people who witness the same events can have shockingly differing versions.

Memories are modified when new information, real or fabricated, is added.

It’s entirely possible to convince people to recall events that never occurred.

Which of your memories are you certain are of real occurrences, or memories of dreams?

dvt 10 hours ago | parent [-]

You're making an argument Descartes formalized in the 1600s (and that folks had been making long before him). It's a cute philosophical puzzle, but we assume that there's no Descartes' Demon fiddling with our thoughts and that we have a continuous and personal inner life that manifests itself, at least in part, through our conscious experience.

thrownthatway 8 hours ago | parent [-]

What are you talking about?

These are all provable, proven facts.

8note 13 hours ago | parent | prev [-]

I don't think this is a meaningful distinction.

It knows the past tokens because they're part of the input for predicting the next token; it's part of the model architecture that it knows them.

If that isn't knowing, then people don't know how to walk, only how to move limbs, and not even that, just how to fire a bunch of neurons.

gopher_space 9 hours ago | parent | next [-]

How close are you to saying that a repair manual "knows" how to fix your car? I think the conversation here is really around word choice and anthropomorphization.

handoflixue 6 hours ago | parent [-]

The problem is, people think word choice influences capabilities: when people redefine "reasoning" or "consciousness" or so on as something only the sacred human soul can do, they're not actually changing what an LLM is capable of doing, and the machine will continue generating "I can't believe it's not Reasoning™" and providing novel insights into mathematics and so forth.

Similarly, the repair manual cannot reason about novel circumstances, or apply logic to fill in gaps. LLMs quite obviously can - even if you have to reword that sentence slightly.

Jensson 10 hours ago | parent | prev [-]

It doesn't know if it produced that token itself or if someone else did.
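This observation also falls out of the loop structure: the model's input is just a token sequence, and nothing in that sequence records who produced each token. A minimal sketch, again using a made-up deterministic `toy_next_token` in place of a real model:

```python
# The model only sees a token sequence; provenance is not part of the input.
def toy_next_token(context):
    # Made-up deterministic stand-in for a model's forward pass.
    return sum(context) % 7

self_generated = [4, 8, 15]  # tokens the "model" emitted earlier
injected = [4, 8, 15]        # identical tokens pasted in by someone else

# Identical contexts yield identical predictions: the model cannot tell
# its own prior output apart from externally supplied tokens.
print(toy_next_token(self_generated) == toy_next_token(injected))  # True
```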