slightwinder 2 days ago

> It won't solve an original problem for which it has no prior context to "complete" an approximated solution with.

Neither can humans. We also just brute force "autocompletion" with our learned knowledge and combine it into new parts, which we then add to our learned knowledge to deepen the process. We are just much, much better at this than AI, after some decades of training.

And I'm not saying that AI is fully there yet and has solved "thinking". IMHO it's more "pre-thinking" or proto-intelligence. The outline is there, but the dots haven't yet merged into the real picture.

> It does not actually add 1+2 when you ask it to do so. It does not distinguish 1 from 2 as discrete units in an addition operation.

Neither can a toddler nor an animal. The level of ability is irrelevant for evaluating its foundation.

cpt_sobel 2 days ago | parent | next [-]

> Neither can humans. We also just brute force "autocompletion"

I have to disagree here. When you are tasked with dividing two big numbers, you most certainly don't "autocomplete" (in the sense of finding the most probable next tokens, which is what an LLM does); rather, you go through a set of steps you have learned. Same as with the strawberry example, you're not throwing guesses until something statistically likely to be correct sticks.
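
To make that concrete, here is a rough Python sketch of what I mean by "a set of steps" (the function names are mine, purely for illustration; this is a toy, not a claim about how brains or models actually work). The point is that the procedure is fixed and gives the same answer every time, with no probabilities involved.

    def count_letter(word, letter):
        # Walk the word one character at a time and keep a tally:
        # a fixed procedure, not a guess at the likeliest answer.
        count = 0
        for ch in word:
            if ch == letter:
                count += 1
        return count

    def long_divide(dividend, divisor):
        # Schoolbook-style division: subtract the divisor step by step
        # until the remainder is smaller than the divisor.
        quotient = 0
        remainder = dividend
        while remainder >= divisor:
            remainder -= divisor
            quotient += 1
        return quotient, remainder

    print(count_letter("strawberry", "r"))   # 3
    print(long_divide(987654, 321))          # (3076, 258)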

slightwinder 2 days ago | parent | next [-]

Humans first recognize the problem, then search through their list of abilities to find the best skill for solving it, thus "autocompleting" their inner shell's command line before they start execution, to stay with that picture. Common AIs today are not much different from this, especially with reasoning modes.

> you're not throwing guesses until something statistically likely to be correct sticks.

What do you mean? That's exactly how many humans operate in unknown situations or on unknown topics. If you don't know, you just throw punches and see what works. Of course, not everyone is ignorant enough to be vocal about this in every situation.

empath75 2 days ago | parent | prev [-]

> I have to disagree here. When you are tasked with dividing two big numbers, you most certainly don't "autocomplete" (in the sense of finding the most probable next tokens, which is what an LLM does); rather, you go through a set of steps you have learned.

Why do you think that this is the part that requires intelligence, rather than a more intuitive process? Because we have had machines that can do this mechanically for well over a hundred years.

There is a whole category of critiques of AI of this type: "Humans don't think this way; they mechanically follow an algorithm/logic." But computers have been able to mechanically follow algorithms and perform logic from the beginning, and that isn't thinking!

cpt_sobel a day ago | parent [-]

Good points - mechanically just following algorithms isn't thinking, and neither is "predicting the next tokens".

But would a combination of the two then be close to what we define as thinking?

notepad0x90 2 days ago | parent | prev | next [-]

Humans, and even animals, track different "variables" or "entities" as distinct things with meaning and logical properties, and then apply some logical system to those properties to compute various outputs. LLMs see everything as one thing: in the case of chat-completion models, they're completing text; in the case of image generation, they're completing an image.

Look at it this way: two students get 100% on an exam. One learned which multiple-choice options are most likely to be correct based on how the question is worded; they have no understanding of the topics at hand and aren't performing any sort of topic-specific reasoning. They're just good at guessing the right option. The second student actually understood the topics, reasoned, and calculated, and that's how they aced the exam.

I recently read about a 3-4 year old who impressed their teacher by reading a storybook perfectly, like an adult. It turns out their parent had read it to them so often that they could predict, from page turns and timing, the exact words that needed to be spoken. The child didn't know what an alphabet, a word, etc. was; they had just gotten very good at predicting the next sequence.

That's the difference here.

slightwinder 2 days ago | parent [-]

I'd say they are all doing the same thing, just in different domains and at different levels of quality. "Understanding the topic" only means they have specialized, deeper contextualized information. But in the end, that student also just autocompletes their memorized data, with the exception that some of that knowledge might trigger a program they execute to insert the result into their completion.

The actual work is in gaining the knowledge and programs, not in accessing and executing them. And how they operate, or on which data, variables, objects, worldview or whatever you call it, might make a difference in quality and building speed, but not for the process in general.

notepad0x90 2 days ago | parent [-]

> only means they have specialized, deeper contextualized information

No, LLMs can have that contextualized information. Understanding in a reasoning sense means classifying the thing and developing a deterministic algorithm to process it. If you don't have a deterministic algorithm to process it, it isn't understanding. LLMs learn to approximate; we do that too, but then we develop algorithms to process input and generate output using a predefined logical process.

A sorting algorithm is a good example when you compare it with an LLM sorting a list. Both may produce the correct outcome, but the sorting algorithm "understood" the logic, will follow that specific logic, and will have consistent performance.
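
Here is a rough sketch of that comparison in plain Python (a toy example, not anything vendor-specific). The sort below applies the same comparison rule to any input, which is what I mean by consistent performance.

    def insertion_sort(items):
        # The same comparison rule is applied to every input, so the
        # result is correct by construction, not correct by likelihood.
        result = list(items)
        for i in range(1, len(result)):
            key = result[i]
            j = i - 1
            while j >= 0 and result[j] > key:
                result[j + 1] = result[j]
                j -= 1
            result[j + 1] = key
        return result

    print(insertion_sort([42, 7, 19, 3]))  # [3, 7, 19, 42], every time
    # An LLM asked to "sort this list" may well print the same answer, but
    # it gets there by predicting likely tokens, with no guarantee that the
    # logic holds for a list it has never seen before.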

slightwinder 2 days ago | parent [-]

> Understanding in a reasoning sense means classifying the thing and developing a deterministic algorithm to process it.

That's the learning part I was talking about, which is mainly supported by humans at the moment, which is why I called it proto-intelligence.

> If you don't have a deterministic algorithm to process it, it isn't understanding.

Commercial AIs like ChatGPT do have the ability to call programs and integrate the result into their processing. Those AIs are not really just LLMs. The results are still rough and poor, but the concept is there and growing.
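
Roughly, the flow looks like this (a toy sketch with invented names, not any vendor's actual API): the model only names a tool and its arguments, an ordinary deterministic program does the computation, and the result is fed back into the model's context.

    # Hypothetical tool-calling loop; the tool registry and the shape of
    # "model_request" are made up for illustration, not a real API.
    TOOLS = {
        "add": lambda a, b: a + b,           # deterministic program
        "divide": lambda a, b: divmod(a, b),
    }

    def handle(model_request):
        # The model decides *which* tool to call and with what arguments;
        # ordinary code does the actual work and returns plain text that
        # gets appended to the conversation.
        tool = TOOLS[model_request["tool"]]
        result = tool(*model_request["args"])
        return f"tool result: {result}"

    # e.g. the model emits {"tool": "divide", "args": [987654, 321]}
    print(handle({"tool": "divide", "args": [987654, 321]}))  # tool result: (3076, 258)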

notepad0x90 2 days ago | parent [-]

> That's the learning part I was talking about, which is mainly supported by humans at the moment, which is why I called it proto-intelligence.

Maybe it's just semantics, but I don't think LLMs even come close to a fruit fly's intelligence. Why can't we recognize and accept them for what they are: really powerful classifiers of data?

> Commercial AIs like ChatGPT do have the ability to call programs and integrate the result into their processing. Those AIs are not really just LLMs. The results are still rough and poor, but the concept is there and growing.

Yeah, RAG and all of that, but those programs use deterministic algorithms. Now, if LLMs generated the programs they call on as tools, that would be much more like the proto-intelligence you're talking about.

Semantics are boring, but it's important that we don't get complacent or celebrate too early by calling it something it isn't.

staticman2 2 days ago | parent | prev | next [-]

>>> We also just brute force "autocompletion"

Wouldn't be an A.I. discussion without a bizarre, untrue claim that the human brain works identically.

Workaccount2 2 days ago | parent | next [-]

There are no true and untrue claims about how the brain works, because we have no idea how it works.

The reason people give that humans are not autocomplete is "Obviously I am not an autocomplete."

Meanwhile, people are just a black-box process that outputs words into their head, which they then take credit for and call cognition. We have no idea how the black box that serves up a word when I say "Think of a car brand" works.

ToucanLoucan 2 days ago | parent | next [-]

> because we have no idea how it works

Flagrantly, ridiculously untrue. We don't know the precise nuts and bolts of how consciousness and the ability to reason emerge, that's fair, but different structures of the brain have been directly linked to different functions and have been observed in operation: patients are stimulated in various ways while attached machinery reads levels of neuro-activity in the brain, in specific regions. We know which parts handle our visual acuity and sense of hearing, and, even cooler, we can watch those same regions light up when we use our "mind's eye" to imagine things or engage in self-talk: completely silent speech that nevertheless engages our verbal center, which is also engaged by the act of handwriting and typing.

In short: no, we don't have the WHOLE answer. But to say that we have no idea is categorically ridiculous.

As to the notion of LLMs doing something similar: no. They are trained on millions of texts from various sources of humans thinking aloud, and that is what you're seeing: a probabilistic read of millions if not billions of documents, written by humans, selected by the machine to "minimize error." And crucially, it can't minimize it 100%. Whatever philosophical points you'd like to raise about intelligence or thinking, I don't think we would ever be willing to call someone intelligent if they just made something up in response to your query because they think you really want it to be real, even when it isn't. Which points to the overall charade: it wants to LOOK intelligent, while not BEING intelligent, because that's what the engineers who built it wanted it to do.

lkey 2 days ago | parent | prev | next [-]

Accepting as true "We don't know how the brain works in a precise way" does not mean that obviously untrue statements about the human brain cannot still be made. Your brain specifically, however, is an electric rat that pulls on levers of flesh while yearning for a taste of God's holiest cheddar. You might reply, "no! that cannot be!", but my statement isn't untrue, so it goes.

staticman2 2 days ago | parent | prev | next [-]

>>>There are no true and untrue claims about how the brain works, because we have no idea how it works.

Which is why, if you pick up a neuroscience textbook, it's 400 blank white pages, correct?

There are different levels of understanding.

I don't need to know how a TV works to know there aren't little men and women acting out the TV shows when I put them on.

I don't need to know how the brain works in detail to know that claims that humans are doing the same thing as LLMs are similarly silly.

solumunus 2 days ago | parent | next [-]

The trouble is that no one knows enough about how the brain works to refute that claim.

staticman2 2 days ago | parent [-]

There's no serious claim that needs refuting.

I don't think any serious person thinks LLMs work like the human brain.

People claiming this online aren't going around murdering their spouses like you'd delete an old LLama model from your hard drive.

I'm not sure why people keep posting these sorts of claims when they can't possibly actually believe them, given their demonstrable real-life behavior.

solumunus 2 days ago | parent [-]

We’re obviously more advanced than an LLM, but to claim that human beings simply generate output based on inputs and context (environment, life experience) is not silly.

> People claiming this online aren't going around murdering their spouses like you'd delete an old LLama model from your hard drive.

Not sure what you’re trying to say here.

staticman2 2 days ago | parent [-]

I'm saying you'd object to being treated like an LLM and don't really have conviction when you make these claims.

I'd also say that stringing together A.I. buzzwords (input, output) to describe humans isn't really an argument so much as what philosophers call a category error.

solumunus 2 days ago | parent [-]

That I wouldn’t treat a human like an LLM is completely irrelevant to the topic.

Input and output are not AI buzzwords; they're fundamental terms in computation. The argument that human beings are computational has been alive in philosophy since the 1940s, brother…

naasking 2 days ago | parent | prev [-]

> I don't need to know how the brain works in detail to know that claims that humans are doing the same thing as LLMs are similarly silly.

Yes you do. It's all computation in the end, and isomorphisms can often be surprising.

solumunus 2 days ago | parent | prev [-]

Our output is quite literally the sum of our hardware (genetics) and input (immediate environment and history). For anyone who agrees that free will is nonsense, the debate is already over: we're nothing more than output-generating biological machines.

slightwinder 2 days ago | parent | prev [-]

Similar, not identical. A bicycle and a car are both vehicles with tires, but they are still not identical kinds of vehicle.

hitarpetar 2 days ago | parent | prev | next [-]

> We also just brute force "autocompletion" with our learned knowledge and combine it into new parts, which we then add to our learned knowledge to deepen the process

You know this because you're a cognitive scientist, right? Or because this is the consensus in the field?

Psyladine 2 days ago | parent | prev [-]

>Neither can a toddler nor an animal. The level of ability is irrelevant for evaluating its foundation.

Its foundation of rational, logical thought that can't process basic math? Even a toddler understands that 2 is more than 1.