1718627440 3 days ago

I don't think we have. Semantic symbolic computation on natural languages still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.

bheadmaster 3 days ago | parent [-]

> Semantic symbolic computation on natural languages still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.

But they do close a big gap - they're capable of "understanding" fuzzy, ill-defined sentences and "inferring" the context, insofar as they can help formalize it into a format parsable by another system.
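A minimal sketch of what that hand-off might look like: the LLM turns a fuzzy request into strict JSON, and a conventional program parses and validates it. The `llm_formalize` function and its schema are hypothetical, with the model call stubbed out for illustration.

```python
import json

def llm_formalize(utterance: str) -> str:
    # Hypothetical LLM call, stubbed with a canned response.
    # A real implementation would prompt a model to emit only
    # JSON matching an agreed schema.
    return '{"intent": "book_meeting", "day": "friday", "time": "15:00"}'

def handle(utterance: str) -> dict:
    """Parse the LLM's structured output so a symbolic system can act on it."""
    parsed = json.loads(llm_formalize(utterance))
    # The downstream system only accepts well-formed requests,
    # so validate the fields it requires.
    missing = {"intent", "day", "time"} - parsed.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {missing}")
    return parsed

result = handle("can we do the usual thing friday afternoon-ish?")
print(result["intent"])  # book_meeting
```

The point is the division of labor: the model absorbs the ambiguity ("the usual thing", "afternoon-ish"), while everything after `json.loads` is ordinary deterministic code.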

skydhash 3 days ago | parent | next [-]

The technique itself is good. Paired with a good amount of data and loads of training time, it’s quite capable of extending prompts in a plausible way.

But that’s it. Nothing here has justified the huge amounts of money still being invested. It’s nowhere near as useful as mainframe computing or as attractive as mobile phones.

grey-area 2 days ago | parent | prev [-]

They do not understand. They predict a plausible next sequence of words.

bheadmaster 2 days ago | parent [-]

I don't disagree with the conclusion, I disagree with the reasoning.

There's no reason to assume that models trained to predict a plausible next sequence of tokens wouldn't eventually develop "understanding", if that were the most efficient way to predict them.