beders 2 hours ago

Thank you for putting it so succinctly.

I keep explaining to my peers, friends, and family that what is actually happening inside an LLM has nothing to do with consciousness or agency, and that the term "AI" is just completely overloaded right now.

erichocean 2 hours ago | parent | next [-]

AI is exactly the right term: the machines can do "intelligence", and they do so artificially.

Just like we have machines that can do "math", and they do so artificially.

Or "logic", and they do so artificially.

I assume we'll drop the "artificial" part in my lifetime, since there's nothing truly artificial about it (just like math and logic); it's really just mechanical.

No one cares that transistors can do math or logic, and it shouldn't bother people that transistors can predict next tokens either.

mayama an hour ago | parent [-]

> AI is exactly the right term: the machines can do "intelligence", and they do so artificially.

AI in pop culture doesn't mean that at all. Most people's impression of AI before the LLM craze came from some form of media based on Asimov's laws of robotics. Now that LLMs have taken over the world, they can define AI as anything they want.

ruszki an hour ago | parent [-]

In 2018, i.e. "pre-LLM", the label "AI" was already stamped on everything, so I highly doubt that most people thought their washing machines were sentient in any way. I remember this starkly, because my team at Ericsson (at that time, about 120k employees) was responsible for one of the crucial steps in getting a model into production, and basically every single project wanted that stamp.

The meaning has been slowly diluted, more and more, across decades.

rudhdb773b 2 hours ago | parent | prev [-]

> what is actually happening inside an LLM has nothing to do with consciousness or agency

What makes you think natural brains are doing something so different from LLMs?

hedgehog an hour ago | parent | next [-]

Structurally, a transformer model is so unrelated to the shape of the brain that there's no reason to think they'd have many similarities. It's also pretty well established that the brain doesn't do anything resembling wholesale SGD (which, to spell it out, is evidence that it doesn't learn in the same way).
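To be clear about what I mean by "wholesale SGD" (a toy sketch of my own, not anyone's actual training code): a single global loss is backpropagated and every parameter in the network gets nudged along its gradient at once, rather than each synapse following a purely local rule.

```python
import numpy as np

def sgd_step(params, grads, lr=1e-2):
    # "Wholesale" update: every parameter moves simultaneously,
    # driven by gradients of one shared, global loss.
    return [w - lr * g for w, g in zip(params, grads)]

rng = np.random.default_rng(0)
params = [rng.normal(size=(4, 3)), rng.normal(size=(3, 2))]
grads = [rng.normal(size=p.shape) for p in params]  # stand-in for backprop output
params = sgd_step(params, grads)
```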

hackinthebochs an hour ago | parent | next [-]

>Structurally a transformer model is so unrelated to the shape of the brain there's no reason to think they'd have many similarities.

Substrate dissimilarities will mask computational similarities. Attention surfaces affinities between nearby tokens; dendrites strengthen and weaken connections to surrounding neurons according to correlations in their firing rates. Not all that dissimilar. A rough illustration of the analogy (a toy sketch of my own, not a claim about how either system is actually implemented) is below.
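```python
import numpy as np

# Attention: pairwise affinities between token representations via dot products.
def attention_affinities(Q, K):
    scores = Q @ K.T / np.sqrt(K.shape[-1])               # similarity of queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax

# Hebbian-style plasticity: weights drift toward the correlation of firing rates.
def hebbian_update(W, pre_rates, post_rates, lr=0.01):
    return W + lr * np.outer(post_rates, pre_rates)       # "fire together, wire together"

rng = np.random.default_rng(0)
Q, K = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(attention_affinities(Q, K))

W = np.zeros((3, 5))
print(hebbian_update(W, rng.random(5), rng.random(3)))
```

Both rules, in their own way, turn "co-activity" into "connection strength"; the analogy goes no further than that.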

rudhdb773b an hour ago | parent | prev [-]

Sure the implementation details are different.

I suppose I should have asked: what definition of "consciousness and agency" are today's LLMs (with proper tooling) not meeting?

And if today's models aren't meeting your standard, what makes you think that future LLMs won't get there?

qsera an hour ago | parent | prev | next [-]

For starters, natural brains have the innate ability to differentiate between things they know and things they have no possibility of knowing...

rudhdb773b an hour ago | parent [-]

Modern LLMs are fairly good at that as well.

qsera 37 minutes ago | parent [-]

But that is bolted on and is not a core behavior.

krainboltgreene an hour ago | parent | prev [-]

Any amount of reading into how we understand brains and LLMs to work.