slightwinder 2 days ago

I'd say they are all doing the same thing, just in different domains and at different levels of quality. "Understanding the topic" only means they have specialized, more deeply contextualized information. But in the end, that student also just autocompletes their memorized data, with the exception that some of that knowledge might trigger a program they execute and whose result they insert into their completion.

The actual work is in gaining the knowledge and the programs, not in accessing and executing them. How they operate, and on which data, variables, objects, worldview or whatever you call it, might make a difference in quality and building speed, but not in the process in general.

notepad0x90 2 days ago | parent

> only means they have specialized, deeper contextualized information

No, LLMs can have that contextualized information. Understanding in a reasoning sense means classifying the thing and developing a deterministic algorithm to process it. If you don't have a deterministic algorithm to process it, it isn't understanding. LLMs learn to approximate; we do that too, but then we go on to develop algorithms that process input and generate output using a predefined logical process.

A sorting algorithm is a good example when you compare it with an LLM sorting a list. Both may produce the correct outcome, but the sorting algorithm "understood" the logic: it will follow that specific logic and have consistent performance.
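
Roughly the contrast, as a minimal sketch (assuming the OpenAI Python SDK; the model name, prompt and naive JSON parsing are placeholder choices):

    import json
    from openai import OpenAI  # assumption: the openai>=1.0 Python SDK is installed

    nums = [5, 3, 9, 1]

    # Deterministic: same input, same provably correct output, every time.
    print(sorted(nums))  # [1, 3, 5, 9]

    # LLM: the output is a prediction and has to be checked after the fact.
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": f"Sort this list ascending and reply with a JSON array only: {nums}"}],
    )
    llm_sorted = json.loads(resp.choices[0].message.content)  # naive parsing
    assert llm_sorted == sorted(nums), "nothing guarantees this holds on every run"

sorted() behaves the same way on every input; the second half can quietly fail on a longer list or an odd prompt, and that consistency gap is the difference I mean.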

slightwinder 2 days ago | parent

> understanding in a reasoning sense means classifying the thing and developing a deterministic algorithm to process it.

That's the learning part I was talking about, which is mainly supported by humans at the moment, which is why I called it proto-intelligence.

> If you don't have a deterministic algorithm to process it, it isn't understanding.

Commercial AIs like ChatGPT do have the ability to call programs and integrate the results into their processing. Those AIs are not really just LLMs. The results are still rough and poor, but the concept is there and growing.
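
For example, the usual function-calling loop looks roughly like this (a sketch assuming the OpenAI Python SDK; the tool and model names are made-up placeholders):

    import json
    from openai import OpenAI  # assumption: the openai>=1.0 Python SDK

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def sort_numbers(numbers):
        # The deterministic program the model delegates to.
        return sorted(numbers)

    tools = [{
        "type": "function",
        "function": {
            "name": "sort_numbers",
            "description": "Sort a list of numbers in ascending order.",
            "parameters": {
                "type": "object",
                "properties": {"numbers": {"type": "array", "items": {"type": "number"}}},
                "required": ["numbers"],
            },
        },
    }]

    messages = [{"role": "user", "content": "Sort 7, 2, 9 for me."}]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
    msg = resp.choices[0].message

    if msg.tool_calls:  # the model chose to call the program instead of guessing
        call = msg.tool_calls[0]
        result = sort_numbers(**json.loads(call.function.arguments))
        messages += [msg, {"role": "tool", "tool_call_id": call.id,
                           "content": json.dumps(result)}]
        # Second pass: the model integrates the program's result into its answer.
        final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
        print(final.choices[0].message.content)

The model decides when to delegate and then folds the program's result back into its answer.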

notepad0x90 2 days ago | parent

> That's the learning part I was talking about, which is mainly supported by humans at the moment, which is why I called it proto-intelligence.

Maybe it's just semantics, but I don't think LLMs even come close to a fruit fly's intelligence. Why can't we recognize and accept them for what they are: really powerful classifiers of data?

> Commercial AIs like ChatGPT do have the ability to call programs and integrate the results into their processing. Those AIs are not really just LLMs. The results are still rough and poor, but the concept is there and growing.

Yeah, RAG and all of that, but those programs use deterministic algorithms written by humans. Now, if LLMs generated the programs they call on as tools, that would be much more like the proto-intelligence you're talking about.
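
Something in that direction would look roughly like this (a toy sketch, again assuming the OpenAI Python SDK; exec'ing model output like this is unsafe and purely for illustration):

    from openai import OpenAI  # assumption: the openai>=1.0 Python SDK

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    # Ask the model to *write* the deterministic program instead of imitating its output.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": "Write only Python code (no prose, no markdown fences) for a "
                              "function merge_sort(xs) that returns xs sorted ascending."}],
    )
    code = resp.choices[0].message.content

    namespace = {}
    exec(code, namespace)  # unsafe outside a sandbox; only for illustration
    merge_sort = namespace["merge_sort"]

    # From here on, the generated program runs deterministically, independent of the model.
    print(merge_sort([5, 3, 9, 1]))  # [1, 3, 5, 9]

Once the generated function exists, every later call is a deterministic process; the open question is whether the model can reliably produce it in the first place.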

Semantics are boring, but it's important that we don't get complacent or celebrate early by calling it something it isn't.