andrepd 2 hours ago

The simulacrum of a thing is not the thing! Not only is the "interesting!" unrelated to any "thought process", but the whole """thinking""" output is not a representation of a thought process at all, merely a post-hoc confabulation that sounds appropriately human-like.

pglevy an hour ago | parent | next

Can't help but think of this passage I re-read recently from Nietzsche:

> When I analyze the process that is expressed in the sentence, "I think," I find a whole series of daring assertions that would be difficult, perhaps impossible, to prove; for example, that it is I who think, that there must necessarily be something that thinks, that thinking is an activity and operation on the part of a being who is thought of as a cause, that there is an "ego," and, finally, that it is already determined what is to be designated by thinking—that I know what thinking is.

miyoji 42 minutes ago | parent

That is saying something completely different from the comment that you're responding to, though.

mpyne 28 minutes ago | parent

No, not really. That comment implies that the LLM is "faking" thinking.

But who actually knows how thinking even works in human brains? And even assuming that LLMs work by a different mechanism, who is to say that this different mechanism can't also be considered "thinking"?

Human brains are realized in the same physics as everything else, so even if quantum-level shenanigans are involved, it will ultimately reduce down to physical operations we can describe, which in turn give rise to information operations. So why the assumption that LLM logic must necessarily be "mimicry" while human cognition still has some real secret sauce to it?

miyoji a minute ago | parent

I agree that is what the commenter is saying.

It is not at all the same as what Nietzsche is saying in that passage. He's critiquing Kant and Descartes on philosophical grounds that have very little to do with the definition of intelligence, or with whether LLMs are intelligent or "can think".

clejack an hour ago | parent | prev

Yes, I recently got access to an annotation platform for LLMs, and I've found many projects associated with generating chain-of-thought (CoT) outputs.

These CoT outputs are the same sort of illusion as the general output. Annotators feed the models scripts of what it looks like to solve problems, so the models generate outputs that look like problem solving.
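
For concreteness, here's a minimal sketch of what one of those records might look like once it's turned into fine-tuning data (the field names and content are hypothetical, not from any particular platform):

    # Hypothetical CoT annotation record; field names are illustrative only.
    cot_record = {
        "prompt": "A train travels 60 mph for 2 hours. How far does it go?",
        "reasoning": [
            "Identify the knowns: speed = 60 mph, time = 2 hours.",
            "Apply distance = speed * time.",
            "Compute: 60 * 2 = 120 miles.",
        ],
        "final_answer": "120 miles",
    }

    # For supervised fine-tuning, the steps are typically concatenated into
    # the target text, so the model learns to emit step-like prose before the
    # answer -- i.e., it learns the script of what problem solving looks like.
    target = "\n".join(cot_record["reasoning"]) + "\nAnswer: " + cot_record["final_answer"]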

I can't remember if I mentioned it previously on here, but an LLM seems to be an extremely powerful synthesis machine. If you give it all of the individual components needed to solve a complex problem that humans might find intractable due to scope or bias, it may be able to crack the problem.
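
To make that "synthesis" framing concrete, one rough (and entirely made-up) illustration: assemble every component of the problem into the prompt explicitly, rather than relying on the model to recall or infer them:

    # Hypothetical example: hand the model all of the pieces explicitly.
    components = [
        "Constraint: coverage must be 24/7 in 12-hour shifts.",
        "Fact: 9 staff members are available.",
        "Fact: labor rules cap each person at 40 hours per week.",
    ]
    prompt = (
        "Using only the components below, propose a consistent schedule:\n"
        + "\n".join("- " + c for c in components)
    )
    # `prompt` would then be sent to whichever model/API is in use.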