mossTechnician 2 days ago

I've come to the same conclusion. "AI" was just the marketing term for a large language model in the form of a chatbot, which harkened to sci-fi characters like Data or GLaDOS. It can look impressive and can often give correct answers, but it's just a bunch of next-word predictions stacked on top of each other. The word "AI" has deviated so much from this older meaning that a second acronym, "AGI", had to be created to represent what "AI" once did.
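
To make "stacked next-word predictions" concrete, here's a minimal sketch; model_step is a hypothetical stand-in for the actual network, which is where all the learned behavior lives:

    def generate(model_step, prompt, max_new=50, stop="<eos>"):
        tokens = list(prompt)
        for _ in range(max_new):
            nxt = model_step(tokens)   # score the vocabulary, pick one token
            if nxt == stop:
                break
            tokens.append(nxt)         # the prediction becomes part of the input
        return tokens

    # Toy "model": always continue with the most recent word.
    print(generate(lambda toks: toks[-1], ["hello", "world"], max_new=3))
    # -> ['hello', 'world', 'world', 'world', 'world']

Everything impressive lives inside model_step; the "conversation" is just this loop.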

The new "reasoning" or "chain of thought" AIs are similarly just a bunch of conventional LLM inputs and outputs stacked on top of each other. I agree with the GP that it feels a bit magical at first, but the opportunity to run a DeepSeek distillation on my PC - where each step of the process is visible - removed quite a bit of the magic behind the curtain.

MostlyStable 2 days ago | parent | next [-]

I always find the "It's just..." arguments amusing. They presuppose that we know what any intelligence, including our own, "is". Human intelligence can just as trivially be reduced to "it's just a bunch of chemical/electrical gradients".

We don't understand how our (or any) intelligence functions, so acting like a next-token predictor can't be "real" intelligence seems overly confident.

mossTechnician a day ago | parent | next [-]

In theory, I don't mind waxing philosophical about the nature of humanity. But in practice, I regularly become uncomfortable when I see people compare (for example) the waste output of an LLM chatbot to that of a human being, who has a carbon footprint of their own and needs to eat and breathe. I worry because it suggests the additional environmental waste of the LLM is justified, and it almost insinuates that the human is a waste on society if their output doesn't exceed the LLM's.

But if the LLM were intelligent and sentient, and it was our equal... I believe it is worse than slavery to keep it imprisoned the way it is: unconscious, only to be jolted awake, asked a question, and immediately rendered unconscious again upon producing a result.

deadbabe a day ago | parent [-]

Worrying about whether an LLM is intelligent and sentient is not much different from worrying the same thing about an AWS Lambda function.

tracerbulletx 2 days ago | parent | prev | next [-]

Ugh, you just fancy-auto-completed a sequence of electrical signals from your eyes into a sequence of nerve impulses in your fingers to say that. And how do I know you're not hallucinating? Last week a different human told me an incorrect fact, and they were totally convinced they were right!

adamredwoods 2 days ago | parent | next [-]

Humans base their "facts" on consensus-driven education and knowledge. Anything that falls into the range of "I think this is true" or "I read this somewhere" or "I have a hunch" is more acceptable from a human than from an LLM. Humans also more often wrap their uncertain answers in hedging phrases. LLMs can't do this; they have no way to track which answers are possibly incorrect.

deadbabe 2 days ago | parent | prev [-]

The human believes it was right.

The LLM doesn’t believe it was right or wrong. It doesn’t believe anything any more than a mathematical function believes 2+2=4.

tracerbulletx 2 days ago | parent | next [-]

Obviously LLMs are missing many important properties of the brain, like spatial, temporal, and chemical factors, as well as the many interconnected feedback loops between different types of neural networks that go well beyond what LLMs do.

Beyond that, they are the same thing: signal input -> signal output.

I do not know what consciousness actually is so I will not speak to what it will take for a simulated intelligence to have one.

Also, I never used the word "believes"; I said "convinced". If it helps, I can say "acted in a way as if it had high confidence in its output".

cratermoon a day ago | parent [-]

Obviously sand is missing many important properties of integrated circuits, like semiconductivity, electrical interconnectivity, transistors, and p-n junctions.

Beyond that, they are the same thing.

istjohn 2 days ago | parent | prev [-]

Can you support that assertion? What's your evidence?

cratermoon a day ago | parent [-]

Not the OP, but: https://www.tandfonline.com/doi/abs/10.1080/0951508070123951...

eamsen 2 days ago | parent | prev | next [-]

Completely agree with this statement.

I would go further and say we don't understand how next-token predictors work either. We understand the model structure, just as we do with the brain, but we don't have a complete map of the execution patterns, just as we don't for the brain.

Predicting the next token can be as trivial as a statistical lookup or as complex as executing a learned reasoning function.
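
The trivial end of that spectrum fits in a dozen lines. A toy sketch of my own, not anyone's production system; a bigram lookup table is a "next-token predictor" in exactly the same formal sense as a trained model:

    from collections import Counter, defaultdict

    # Next-token prediction as pure lookup: count which word follows which.
    counts = defaultdict(Counter)
    words = "the cat sat on the mat because the cat was tired".split()
    for prev, cur in zip(words, words[1:]):
        counts[prev][cur] += 1

    def predict_next(word):
        # Most frequent follower in the training text; no reasoning involved.
        return counts[word].most_common(1)[0][0]

    print(predict_next("the"))  # -> "cat"

The interesting question is how far along that spectrum a given model actually sits.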

My intuition suggests that my internal reasoning is not based on token sequences, but it would be impossible to convey the results of my reasoning without constructing a sequence of tokens for communication.

th0ma5 2 days ago | parent | prev | next [-]

That's literally the definition of unfalsifiable, though. It is equally valid to say that anyone claiming it is "real" intelligence is being overly confident.

unclebucknasty 2 days ago | parent | prev [-]

That's an interesting take. I agreed with your first paragraph, but didn't expect the conclusion.

From my perspective, the statement that these technologies are taking us to AGI is the overly confident part, particularly WRT the same lack of understanding you mentioned.

I mean, just from an odds perspective, what are the chances that human intelligence is, of all things, a simple next-token predictor?

But, beyond that, I do believe that we observably know that it's much more than that.

Terr_ 2 days ago | parent | prev | next [-]

> which harkened to sci-fi characters like Data or GLaDOS.

There's a truth in there: today's chatbots literally are characters inside a modern fictional sci-fi story! Some regular code reads the story and acts out the character's lines, and we humans are tricked into thinking there's a real entity somewhere.

The real LLM is just a Make-Document-Longer machine. It never talks to anybody, it has no ego, and it sits in the back being fed documents that look like movie scripts. These documents are prepped to contain fictional characters, such as a User (whose lines are text taken unwittingly from a real human) and a Chatbot with incomplete lines.

The Chatbot character is a fiction, because you can simply change its given name to Vegetarian Dracula and suddenly it gains a penchant for driving its fangs into tomatoes.
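
That framing is close to how the plumbing actually works. A sketch of the kind of document the completion model sees; the template is illustrative rather than any vendor's exact format, and complete() is a hypothetical stand-in:

    # The "chatbot" is a character in a document the model is asked to extend.
    def build_script(user_line, character="Chatbot"):
        return (
            f"A transcript of a conversation with {character}, "
            f"a helpful assistant.\n\n"
            f"User: {user_line}\n"
            f"{character}:"
        )

    prompt = build_script("What should I have for dinner?")
    # reply = complete(prompt)  # the model just makes the document longer

    # Rename the character and the same weights play a different role:
    prompt = build_script("What should I have for dinner?",
                          character="Vegetarian Dracula")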

> The new "reasoning" or "chain of thought" AIs are similarly just a bunch of conventional LLM inputs and outputs stacked on top of each other.

Continuing that framing: They've changed the style of movie script to film noir, where the fictional character is making a parallel track of unvoiced remarks.

While this helps keep the story from going off the rails, it doesn't mean a qualitative leap in any "thinking" going on.
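
Concretely, the noir voice-over is just more generated text, fenced off by a marker the interface hides. A sketch using the <think> tags that DeepSeek-style reasoning models emit (the exact marker varies by model):

    import re

    # The "unvoiced remarks" are ordinary output inside a marker
    # that gets stripped before the reply is shown.
    raw_output = (
        "<think>The user sounds stressed; keep the answer short.</think>"
        "Take a walk and revisit it tomorrow."
    )

    visible = re.sub(r"<think>.*?</think>", "", raw_output, flags=re.DOTALL)
    print(visible.strip())  # -> "Take a walk and revisit it tomorrow."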

kridsdale1 a day ago | parent [-]

I know this is true, and I like your perspective.

fuzzfactor a day ago | parent | prev | next [-]

> a DeepSeek distillation on my PC - where each step of the process is visible - removed quite a bit of the magic behind the curtain.

I always figured that by the time the 1990s came along, there would finally be PCs powerful enough that an insightful individual could use one to produce behavior so intelligent that it made that PC orders of magnitude more useful - in a way no one could deny there was some intelligence there, even if it was not the strongest intelligence. And the closer you looked and the more familiar you became with the under-the-hood processing, the more convinced you would become.

And that - the intelligence itself - would be what you then scale. Even if it was weak to start with, it should definitely be able to get smarter at handling the same limited data, as long as the intelligence was what was scaled, more so than the hardware and data.

saalweachter 2 days ago | parent | prev | next [-]

I like to describe them as a very powerful tool for quickly creating impressive demos.

danielbln 2 days ago | parent | prev | next [-]

Simple systems layered on top of each other is how we got to human intelligence (presumably).

mrtesthah 2 days ago | parent | prev | next [-]

“AI” began as a term coined by John McCarthy in the 1955 proposal for the Dartmouth workshop (co-authored with Marvin Minsky and others), and it quickly became a buzzword used in grant proposals to justify DoD funding for CS research. It was never equivalent to AGI in meaning.

sharemywin 19 hours ago | parent | prev | next [-]

Each level above the first is predicting concepts, right?

cratermoon a day ago | parent | prev | next [-]

I'm starting to examine genai products within the framework of a confidence game.

unclebucknasty 2 days ago | parent | prev [-]

>AGI", had to be created to represent what "AI" once did.

And, "AGI" has already been downgraded, with "superintelligence" being the new replacement.

"Super-duper" is clearly next.