AntiUSAbah 3 hours ago

You described how you think, plan, and reason about how to do things, and I assumed you brought up your way of thinking because you assume an LLM doesn't do any of it.

I then showed counterexamples.

mort96 3 hours ago | parent [-]

I don't think you showed counterexamples? Or can you link me to a paper that describes a language model thinking without predicting tokens?

AntiUSAbah 2 hours ago | parent | next [-]

My second sentence references all these papers:

"COCONUT, PCCoT, PLaT and co are directly linked to 'thinking in latent space'. yann lecun is working on this too, we have JEPA now."

mort96 2 hours ago | parent [-]

And it does this thinking without producing tokens?

AntiUSAbah an hour ago | parent [-]

Yes.

Btw, just because you have to do something to the LLM to trigger the flow of information through the model doesn't mean it can't think. It only means we have to build an architecture around the model, or build it into the model's base architecture, to enable more thinking.
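
A minimal sketch of what "architecture around the model" can mean, loosely following the COCONUT idea and assuming a Hugging-Face-style causal LM; the model name, prompt, and step count are arbitrary placeholders, and a stock model won't actually reason this way (COCONUT trains the model to use these latent steps). The point is just that the last hidden state can be fed back as the next input embedding, so several "thought" steps happen without a single token being produced:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # plain GPT-2 only to keep the sketch runnable
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tok = AutoTokenizer.from_pretrained("gpt2")

    prompt = tok("2 + 2 * 3 =", return_tensors="pt")
    embeds = model.get_input_embeddings()(prompt.input_ids)      # (1, seq, hidden)

    with torch.no_grad():
        for _ in range(4):                                        # 4 latent "thought" steps
            out = model(inputs_embeds=embeds, output_hidden_states=True)
            last_hidden = out.hidden_states[-1][:, -1:, :]        # final position's state
            embeds = torch.cat([embeds, last_hidden], dim=1)      # fed back, no token sampled

        # only now decode an actual token from the accumulated latent sequence
        next_id = model(inputs_embeds=embeds).logits[:, -1, :].argmax(-1)
        print(tok.decode(next_id))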

We do not know how the brain's architecture is set up for this. We could have sub-agents, or we could be a Mixture-of-Experts type of 'model'.

There is also ongoing work on combining multimodal inputs, and on diffusion models, which look completely different from an output point of view.

If you look at how an LLM does math: Anthropic showed in a blog article that they found structures for estimating numbers similar to how a brain does it.

Another experiment someone did was to clone layers and simply add the copies beneath the original layers. This improved certain tasks. My assumption is that it lengthens and strengthens a kind of thinking structure.
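
As a rough sketch of that layer-cloning idea (often called depth up-scaling or passthrough merging), again assuming GPT-2 just to keep it concrete; which span of layers gets duplicated is an arbitrary choice here, not the setup from that experiment:

    import copy
    import torch.nn as nn
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("gpt2")
    blocks = model.transformer.h                      # GPT-2's stack of transformer blocks

    deepened = []
    for i, block in enumerate(blocks):
        deepened.append(block)
        if 4 <= i < 8:                                # clone a middle span of layers
            deepened.append(copy.deepcopy(block))     # copy sits directly beneath the original

    model.transformer.h = nn.ModuleList(deepened)
    model.config.n_layer = len(deepened)
    print(len(blocks), "->", len(deepened), "layers")  # 12 -> 16, without any retraining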

But because LLMs are still this good and still delivering relevant improvements, I think a whole field of thinking in this regard is still quite unexplored.

CamperBob2 2 hours ago | parent | prev [-]

If you ask a model to multiply 322423324 by 8675309232 without using tools, it's interesting to think about how it does it. Where are the intermediate results being maintained?

"In context" is the obvious answer... but if you view the chain of thought from a reasoning model, it may have little or nothing to do with arriving at the correct answer. It may even be complete nonsense. The model is working with tokens in context, but internally the transformer is maintaining some state with those tokens that seems to be independent of the superficial meanings of the tokens. That is profoundly weird, and to me, it makes it difficult to draw a line in the sand between what LLMs can do and what human brains can do.
