8note 13 hours ago

I don't think this is a meaningful distinction.

It knows the past tokens because they're part of the input for predicting the next token. That's built into the model architecture.

If that isn't knowing, then people don't know how to walk, only how to move limbs; and not even that, just a bunch of neurons firing.
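The architectural point above can be made concrete with a toy sketch (this is a hypothetical stand-in, not a real LLM): in an autoregressive model, the only "memory" of the conversation is the token sequence handed to the model at each step, and every past token is part of the input for predicting the next one.

```python
def toy_next_token(context):
    # Stand-in for a neural network: any deterministic function
    # of the full prefix. Here: sum of token ids, mod a toy vocab size.
    return sum(context) % 100

def generate(prompt_tokens, n_steps):
    context = list(prompt_tokens)
    for _ in range(n_steps):
        # Every past token, whether from the prompt or previously
        # generated, feeds into predicting the next token.
        context.append(toy_next_token(context))
    return context

print(generate([1, 2, 3], 4))  # → [1, 2, 3, 6, 12, 24, 48]
```

The prompt tokens and the model's own outputs end up in one undifferentiated sequence, which is the sense in which the model "knows" them.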

gopher_space 9 hours ago | parent | next [-]

How close are you to saying that a repair manual "knows" how to fix your car? I think the conversation here is really around word choice and anthropomorphization.

handoflixue 6 hours ago | parent [-]

The problem is that people think word choice influences capabilities. When people redefine "reasoning" or "consciousness" and so on as something only the sacred human soul can do, they aren't actually changing what an LLM is capable of doing: the machine will keep generating "I can't believe it's not Reasoning™" and providing novel insights into mathematics all the same.

Similarly, the repair manual cannot reason about novel circumstances or apply logic to fill in gaps. LLMs quite obviously can, even if you have to reword that sentence slightly.

Jensson 10 hours ago | parent | prev [-]

It doesn't know if it produced that token itself or if someone else did.
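This point can also be shown with a toy sketch (hypothetical, using the same kind of stand-in predictor as above): the context carries no provenance bit, so a token the model sampled itself and an identical token injected by someone else lead to exactly the same subsequent predictions.

```python
def toy_next_token(context):
    # Stand-in for a neural network's next-token prediction.
    return sum(context) % 100

# Case A: the model itself produced the token 6 after the prompt [1, 2, 3].
self_produced = [1, 2, 3] + [toy_next_token([1, 2, 3])]

# Case B: someone else wrote the same token 6 into the context by hand.
injected = [1, 2, 3, 6]

# The two contexts are indistinguishable, so all later predictions match:
# the model has no record of who produced which token.
print(self_produced == injected)                                   # → True
print(toy_next_token(self_produced) == toy_next_token(injected))   # → True
```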