rudhdb773b 2 hours ago

> what actually is happening inside an LLM has nothing to do with conscience or agency

What makes you think natural brains are doing something so different from LLMs?

hedgehog 2 hours ago | parent | next [-]

Structurally, a transformer model is so unrelated to the shape of the brain that there's no reason to think they'd have many similarities. It's also pretty well established that the brain doesn't do anything resembling wholesale SGD (which, to spell it out, is evidence that it doesn't learn in the same way).

hackinthebochs an hour ago | parent | next [-]

>Structurally a transformer model is so unrelated to the shape of the brain there's no reason to think they'd have many similarities.

Substrate dissimilarities will mask computational similarities. Attention surfaces affinities between nearby tokens; dendrites strengthen and weaken connections to surrounding neurons according to correlations in firing rates. Not all that dissimilar.
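A toy sketch of the analogy being drawn (my own illustration, not from the thread): attention turns pairwise dot-product affinities between tokens into normalized weights, while a Hebbian-style rule strengthens a connection in proportion to the correlated activity of the two neurons it joins. Both map pairwise co-activity onto connection strength.

```python
import math
import random

random.seed(0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Attention-style affinity: dot products between token vectors,
# softmax-normalized into a weight over the other tokens.
tokens = [[random.gauss(0, 1) for _ in range(8)] for _ in range(4)]
attn = []
for q in tokens:
    scores = [dot(q, k) for k in tokens]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn.append([e / z for e in exps])

# Hebbian-style update: a synapse strengthens in proportion to the
# correlated firing of its two neurons ("fire together, wire together").
rates = [random.random() for _ in range(4)]
lr = 0.1
W = [[lr * rates[i] * rates[j] for j in range(4)] for i in range(4)]

# Each attention row sums to 1; the Hebbian matrix is symmetric.
```

Of course this only shows a loose computational rhyme, not equivalence, which is roughly the point being argued.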

rudhdb773b an hour ago | parent | prev [-]

Sure the implementation details are different.

I suppose I should have asked: what definition of "consciousness and agency" are today's LLMs (with proper tooling) failing to meet?

And if today's models aren't meeting your standard, what makes you think that future LLMs won't get there?

qsera 2 hours ago | parent | prev | next [-]

For starters, natural brains have the innate ability to differentiate between things they know and things they have no possibility of knowing...

rudhdb773b an hour ago | parent [-]

Modern LLMs are fairly good at that as well.

qsera 43 minutes ago | parent [-]

But that is bolted on and is not a core behavior.

krainboltgreene 2 hours ago | parent | prev [-]

Any amount of reading into how we understand brains and LLMs to work.