t23414321 3 days ago

Wouldn't 'thinking' need to involve updating a model of reality (an LLM isn't that yet, it's just words), redoing at every step all the extensive calculation that went into creating or approximating that model in the first place (i.e., learning)?

Expecting machines to think is... like magical thinking (though they are indeed good at calculation).

I wish we didn't use the word 'intelligence' in the context of LLMs. In short: there is essence, and the rest is only a slide into all possible combinations of Markov chains. Whether those combinations make sense or not, I don't see how one part of a calculation could recognize it, or how that could even be possible from inside a calculation that doesn't consider the question at all.

Setting aside artificial knowledge (cut off from senses, experience, and context length, confabulating without knowing it), I would like to see intelligent knowledge: built in a semantic way and allowed to expand along connections that are not yet obvious but do exist (not random ones). I wouldn't expect it to think (humans think, digital machines calculate), but I would expect it to tend toward reflecting and modeling reality more closely rather than less, while expanding its implications.
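
To make the "all possible combinations of Markov chains" point concrete, here is a minimal order-1 Markov chain text generator in Python (an illustrative sketch only, not what an LLM literally runs): each step samples purely from observed combinations, with nothing inside the calculation to recognize whether the output makes sense.

    import random
    from collections import defaultdict

    def train(tokens):
        # Record which token has been observed to follow which.
        follows = defaultdict(list)
        for cur, nxt in zip(tokens, tokens[1:]):
            follows[cur].append(nxt)
        return follows

    def generate(follows, start, n=10):
        out = [start]
        for _ in range(n):
            choices = follows.get(out[-1])
            if not choices:
                break
            # Sample from observed combinations; nothing here checks,
            # from inside the calculation, whether the result has sense.
            out.append(random.choice(choices))
        return " ".join(out)

    corpus = "the cat sat on the mat and the cat ran".split()
    print(generate(train(corpus), "the"))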

Retric 3 days ago | parent

Thinking is different from forming long-term memories.

An LLM could be thinking in one of two ways: either between adding each individual token, or collectively across multiple tokens. At the individual-token level the physical mechanism doesn't seem to fit the definition, being essentially a reflexive action; across multiple tokens it's a little more questionable, especially as multiple approaches are used.
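
Concretely, the two levels can be pictured as the single forward pass versus the sampling loop around it (a toy Python sketch; `model` here is a stand-in for next-token prediction, not a real LLM API):

    def model(context):
        # Level 1: one forward pass per token -- a fixed amount of work,
        # essentially a reflexive action.
        return context[-1] + 1  # toy stand-in for next-token prediction

    def generate(prompt, steps):
        context = list(prompt)
        for _ in range(steps):
            # Level 2: the loop across tokens, where each output is fed
            # back in as input; any "thinking" across multiple tokens
            # would live in this accumulation.
            context.append(model(context))
        return context

    print(generate([0], 5))  # -> [0, 1, 2, 3, 4, 5]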

t23414321 3 days ago | parent

An LLM is calculated from language (that is, from things humans have said, whether true or not). It's not the anthropomorphic process that using the word 'thinking' suggests (a word chosen because it sells well).

> across multiple tokens

But how many? How many happen in a single person's life? How many in some calculation? Does it matter, if a calculation doesn't reflect them but stays all the same? (A conversation with a radio: would it make any sense?)

Retric 3 days ago | parent

The general public has no issue saying a computer is thinking when you're sitting there waiting for it to calculate a route or perform a similar process like selecting a chess move.

The connotation is simply an internal process of indeterminate length rather than a reflexive, fixed-length one. So they don't apply it when a GPU is slinging out 120 FPS in a first-person shooter.
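
The distinction can be sketched in a few lines of Python (a toy illustration of the connotation, not a claim about how any particular system works):

    def pick_move(moves, good_enough):
        # Indeterminate length: keeps evaluating until some condition is
        # met, so you sit there waiting -- what people call "thinking".
        best = None
        for move in moves:
            if best is None or move > best:
                best = move
            if good_enough(best):
                break
        return best

    def render_frame(scene):
        # Reflexive, fixed length: the same bounded work every call,
        # 120 times a second -- nobody calls this "thinking".
        return [x * 2 for x in scene]

    print(pick_move([3, 9, 4, 7], lambda b: b >= 8))  # -> 9
    print(render_frame([1, 2, 3]))                    # -> [2, 4, 6]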

t23414321 3 days ago | parent

That's right when you say 'selecting' rather than 'calculating' a chess move, assuming you are outside Plato's cave (Popper).

But now I see this: the truth is static and non-profit, while a calculation can be sold again and again. If you have a hammer (processing), everything looks like a nail; to sell well, the word 'thinking' had to be used instead of an excuse for the results being different every time (like the shadows). And then we can only have things that let someone else keep making profits: JS, LLMs, whatever (just nothing 'XSLT'-like).

(Though I still need to study your second sentence. ;)

t23414321 3 days ago | parent

.. and to compare it with what was said about Prolog and the like in recent years, e.g. "intended benefit requires an unreasonably (or impossibly?) smart compiler" (https://news.ycombinator.com/item?id=14441045). Isn't that quite similar to LLMs, which for their intended benefit require... impossibly smart users? (There were a few; assuming they got what they wanted, not peanuts.)
