| ▲ | varispeed 6 hours ago |
| There is nothing smart about current LLMs. They just regurgitate text compressed in their memory based on probability.
None of the LLMs currently have actual understanding of what you ask them to do and what they respond with. |
|
| ▲ | bsenftner 5 hours ago | parent | next [-] |
| We know that, but that does not make them useless. The opposite, in fact: they are extremely useful in the hands of non-idiots. We just happen to have an oversupply of idiots at the moment, which AI is here to eradicate. /Sort of satire. |
|
| ▲ | adamtaylor_13 4 hours ago | parent | prev | next [-] |
| If LLMs just regurgitated compressed text, they'd fail on any novel problem not in their training data. Yet they routinely solve them, which means whatever's happening between input and output is more than retrieval. And calling it "not understanding" requires you to define understanding in a way that conveniently excludes everything except biological brains. |
| |
| ▲ | sfn42 2 hours ago | parent | next [-] |
| Yes, there are some fascinating emergent properties at play, but when they fail it's blatantly obvious that there's no actual intelligence or understanding. They are very cool and very useful tools. I use them on a daily basis now, and the way I can just paste a vague screenshot with some vague text and they get it and give a useful response blows my mind every time. But it's very clear that it's all just smoke and mirrors: they're not intelligent and you can't trust them with anything. |
| ▲ | pennomi 2 hours ago | parent [-] |
| When humans fail a task, it's obvious there is no actual intelligence or understanding. Intelligence is not as cool as you think it is. |
| |
| ▲ | varispeed 4 hours ago | parent | prev [-] |
| They don't solve novel problems. But if you have such a strong belief, please give us examples. |
|
|
| ▲ | visarga 4 hours ago | parent | prev | next [-] |
| So you are saying they are like a copy command: LLMs will copy some training data back to you? Why do we spend so much money training and running them if they "just regurgitate text compressed in their memory based on probability"? Billions of dollars to build a lossy grep. I think you are confused about LLMs: they take in context, and that context makes them generate new things; for existing things we have cp. By your logic pianos can't be creative instruments because they just produce the same 88 notes. |
|
| ▲ | small_model 6 hours ago | parent | prev | next [-] |
| That's not how they work. Pro tip: maybe don't comment until you have a good understanding? |
| |
| ▲ | fyltr 6 hours ago | parent | next [-] |
| Would you mind rectifying the wrong parts then? |
| ▲ | retsibsi 5 hours ago | parent | next [-] |
| Phrases like "actual understanding", "true intelligence" etc. are not conducive to productive discussion unless you take the trouble to define what you mean by them (which ~nobody ever does). They're highly ambiguous, and it's never clear what specific claims they do or don't imply when used by any given person.
But I think this specific claim is clearly wrong, if taken at face value:
> They just regurgitate text compressed in their memory
They're clearly capable of producing novel utterances, so they can't just be doing that. (Unless we're dealing with a very loose definition of "regurgitate", in which case it's probably best to use a different word if we want to understand each other.) |
| ▲ | mhl47 5 hours ago | parent | prev [-] |
| The fact that the outputs are probabilities is not important. What is important is how that output is computed. You could imagine that it is possible to learn certain algorithms/heuristics that "intelligence" is composed of, no matter what the output format is. Training for optimal compression of tasks and action-taking could lead to intelligence being the best solution. This is far from a formal argument, but so is the stubborn reiteration of "it's just probabilities" or "it's just compression", because this "just" thing is getting more and more capable of solving tasks that are surely not in the training data in exactly this form. |
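| To make concrete what "the outputs are probabilities" refers to, here is a minimal sketch of a single next-token step, assuming a toy 4-token vocabulary and made-up logits; the names and values are illustrative, not any particular model's API. The point it shows: the probabilities are just a softmax over logits at the very end, and everything contested in this thread happens in whatever computed those logits.

    import math
    import random

    def softmax(logits):
        # Convert raw scores (logits) into a probability distribution.
        m = max(logits)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def sample_next_token(logits, temperature=1.0):
        # The "it's just probabilities" part: scale, normalize, sample.
        probs = softmax([x / temperature for x in logits])
        r = random.random()
        cum = 0.0
        for token_id, p in enumerate(probs):
            cum += p
            if r < cum:
                return token_id
        return len(probs) - 1  # guard against floating-point rounding

    # Hypothetical logits over a 4-token vocabulary (purely for illustration).
    logits = [2.0, 0.5, -1.0, 0.1]
    print(sample_next_token(logits))  # most often 0, sometimes another token

Nothing in this final step distinguishes a lossy grep from something smarter; that distinction lives entirely in the computation that produced the logits. |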
|
| ▲ | 100721 6 hours ago | parent | prev [-] |
| Huh? Their words are an accurate, if simplified, description of how they work. |
|
|
| ▲ | beyondCritics 5 hours ago | parent | prev [-] |
| Just HI slop. Ask any decent model; it can explain what's wrong with this description. |