bheadmaster 3 days ago

> LLMs are not AI. Machine learning is more useful.

LLMs are a particular application of machine learning, and as such they both benefit from and contribute to general machine learning techniques.

I agree that LLMs are not the AI we all imagine, but the fact that they passed such a huge milestone is a big deal - natural language understanding used to be one of the benchmarks for AGI!

I believe it is only a matter of time until we get multi-sensory, self-modifying large models that can both understand and learn from all five human senses, and maybe even some senses we have no access to.

pyzhianov 3 days ago | parent | next [-]

> natural language used to be one of the metrics of AGI

what if we chose the wrong metric there?

1718627440 3 days ago | parent [-]

I don't think we have. Semantic symbolic computation on natural language still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.
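To illustrate the kind of thing I mean, a toy forward-chaining sketch in Python - the fact and rule are hand-written here, where a real system would extract them from the sentences themselves:

    # "Socrates is a man. All men are mortal." -> is Socrates mortal?
    facts = {("man", "socrates")}     # extracted fact
    rules = [("man", "mortal")]       # extracted rule: man(X) -> mortal(X)

    def infer(facts, rules):
        derived = set(facts)
        changed = True
        while changed:                # keep applying rules until a fixed point
            changed = False
            for premise, conclusion in rules:
                for pred, subj in list(derived):
                    if pred == premise and (conclusion, subj) not in derived:
                        derived.add((conclusion, subj))
                        changed = True
        return derived

    print(("mortal", "socrates") in infer(facts, rules))   # True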

bheadmaster 3 days ago | parent [-]

> Semantic symbolic computation on natural language still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.

But they do close a big gap - they're capable of "understanding" fuzzy, ill-defined sentences and "inferring" the context, to the point where they can help formalize it into a format parsable by another system.
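Something like this, roughly (a sketch - llm_complete stands in for whatever completion API you use, and the JSON schema is made up for the example):

    import json

    def llm_complete(prompt):
        """Hypothetical stand-in for a call to your LLM provider."""
        raise NotImplementedError

    def formalize(fuzzy_request):
        # Ask the model to turn an ill-defined sentence into strict JSON
        # that a downstream, non-LLM system can actually parse.
        prompt = (
            'Convert the request below into JSON with keys '
            '"action", "object" and "constraints" (a list). '
            'Reply with JSON only.\n\n' + fuzzy_request
        )
        return json.loads(llm_complete(prompt))

    # e.g. formalize("can u book me something to tokyo, cheap-ish, next month")
    # might yield {"action": "book_flight", "object": "tokyo",
    #              "constraints": ["low_price", "next_month"]}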

skydhash 3 days ago | parent | next [-]

The technique itself is good. And paired with a good amount of data and loads of training time, it’s quite capable of extending prompts in a plausible way.

But that’s it. Nothing here has justified the huge amounts of money still being invested. It’s nowhere near as useful as mainframe computing or as attractive as mobile phones.

grey-area 2 days ago | parent | prev [-]

They do not understand. They predict a plausible next sequence of words.
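That really is the whole mechanism - roughly this loop (a sketch; model here is any trained network mapping a token sequence to logits over the vocabulary):

    import numpy as np

    def sample_next(model, tokens, temperature=1.0):
        logits = model(tokens)           # a score for every vocabulary entry
        logits = logits - logits.max()   # shift for numerical stability
        probs = np.exp(logits / temperature)
        probs = probs / probs.sum()      # softmax -> probability distribution
        return int(np.random.choice(len(probs), p=probs))

    def generate(model, tokens, n=20):
        for _ in range(n):
            tokens = tokens + [sample_next(model, tokens)]
        return tokens                    # a plausible continuation, nothing more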

bheadmaster 2 days ago | parent [-]

I don't disagree with the conclusion, I disagree with the reasoning.

There's no reason to assume that models trained to predict a plausible next sequence of tokens wouldn't eventually develop "understanding", if that turned out to be the most efficient way to predict them.

grey-area 3 days ago | parent | prev [-]

LLMs have shown no signs of understanding.