grey-area 3 days ago:
We have not yet entered the AI age, though I believe we will. LLMs are not AI. Machine learning more broadly is more useful. Perhaps LLMs will evolve, or perhaps they will prove a dead end.
|
bheadmaster 3 days ago:
> LLMs are not AI. Machine learning more broadly is more useful.

LLMs are a particular application of machine learning, and as such they both benefit from and contribute to general machine learning techniques. I agree that LLMs are not the AI we all imagine, but the fact that they broke through a huge milestone is a big deal - natural language used to be one of the metrics of AGI! I believe it is only a matter of time until we get multi-sensory, self-modifying large models that can both understand and learn from all five human senses, and maybe even some senses we have no access to.
pyzhianov 3 days ago:
> natural language used to be one of the metrics of AGI

What if we have chosen the wrong metric there?
1718627440 3 days ago:
I don't think we have. Semantic symbolic computation on natural languages still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.
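
As a toy illustration of what that means - explicit facts plus inference rules, nothing learned from data. The mini-parser and the predicates are invented just for this example:

    # Toy forward-chaining inference over facts parsed from sentences.
    # The "parser" is a stub; the predicates are made up for the example.
    def parse(sentence):
        subj, _, _, obj = sentence.lower().split()  # "Socrates is a man"
        return ("is_a", subj, obj)

    facts = {parse("Socrates is a man")}
    # Rule: anything that is_a "man" is_a "mortal" (all men are mortal).
    rules = [(("is_a", "man"), ("is_a", "mortal"))]

    changed = True
    while changed:  # forward-chain until no new facts appear (fixpoint)
        changed = False
        for (pred, cls), (cpred, ccls) in rules:
            for fp, fs, fo in list(facts):
                new = (cpred, fs, ccls)
                if fp == pred and fo == cls and new not in facts:
                    facts.add(new)
                    changed = True

    print(("is_a", "socrates", "mortal") in facts)  # -> True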
bheadmaster 3 days ago:
> Semantic symbolic computation on natural languages still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.

But they do close a big gap - they're capable of "understanding" fuzzy, ill-defined sentences and "inferring" the context, insofar as they can help formalize it into a format parsable by another system. A rough sketch of that pipeline follows.
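
A minimal sketch of that "fuzzy text in, formal structure out" step. Here complete() stands in for any LLM call and is hypothetical, as is the requested schema:

    import json

    def complete(prompt: str) -> str:
        """Hypothetical LLM call; substitute any chat-completion API."""
        raise NotImplementedError

    def formalize(fuzzy_request: str) -> dict:
        # Ask the model to translate vague language into a fixed schema
        # that a downstream rule-based system can parse deterministically.
        prompt = (
            "Rewrite the request as JSON with keys "
            '"action", "object", "constraints" (list of strings).\n'
            f"Request: {fuzzy_request}\nJSON:"
        )
        return json.loads(complete(prompt))  # fails loudly on invalid JSON

    # e.g. formalize("can you kinda sort the big files somewhere else?")
    # might yield: {"action": "move", "object": "large files",
    #               "constraints": ["sort by size", "different directory"]}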
skydhash 3 days ago:
The technique itself is good. And paired with a good amount of data and loads of training time, it's quite capable of extending prompts in a plausible way. But that's it. Nothing here has justified the huge amounts of money still being invested. It's nowhere near as useful as mainframe computing, or as attractive as mobile phones.
grey-area 2 days ago:
They do not understand. They predict a plausible next sequence of words.
bheadmaster 2 days ago:
I don't disagree with the conclusion; I disagree with the reasoning. There's no reason to assume that models trained to predict a plausible next sequence of tokens wouldn't eventually develop "understanding", if that turned out to be the most efficient way to predict them.
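
For concreteness, "trained to predict the next token" means minimizing the standard autoregressive cross-entropy loss over a corpus:

    \mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_1, \ldots, x_{t-1})

The argument above is that nothing in this objective rules out internal mechanisms we would call understanding, if such mechanisms lower the loss.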
|
|
|
grey-area 3 days ago:
LLMs have shown no signs of understanding.
|
|
anonzzzies 3 days ago:
| We keep moving the goalposts... |
Illniyar 3 days ago:
The goal remains the same - AGI is what we see in sci-fi movies: an infallible, human-like intelligence that has access to infinite knowledge, can navigate it without fail, and is capable of performing any digital action a human can. What changed is how we measure progress. This is common in the tech world - sometimes your KPIs become the goal in themselves, and you must design new KPIs. Obviously NLP was not a good enough predictor of progress towards AGI, and we must find a better metric.
econ 3 days ago:
Maybe progress is linear enough to figure out where the goalposts will be 10, 20, or 50 years from now.