da_chicken 4 days ago
The issue is one that's been stated here before: LLMs are language models. They are not world models. They are not problem models. They do not actually understand the world, the underlying entities represented by language, or the problems being addressed. LLMs understand the shape of a correct answer and how the components of language fit together to form one. They can do that because they have seen enough language to know what correct answers look like. In human terms, we would call that knowing how to bullshit. But, like a college student hitting junior year, sooner or later you learn that bullshitting only gets you so far.

That's what we've really done: we've taught computers how to bullshit. We've also managed to finally invent something that lets us communicate relatively directly with a computer using human language. The language-processing capabilities of an LLM are an astonishing, multi-generational leap, and these types of models will absolutely be the foundation for computing interfaces in the future. But they're still language models. To me it feels like we've invented a new keyboard, and people are fascinated by the stories the thing produces.
|
rbranson 4 days ago
Is it bullshitting to perform nearly perfect language-to-language translation, or to generate photorealistic depictions from text quite reliably? Or to reliably perform named entity extraction, or any of the other millions of real-world tasks LLMs already perform quite well?

da_chicken 4 days ago
Picking another task like translation, which doesn't really require any knowledge outside of language processing, is not a particularly good way to convince me that LLMs are doing anything other than language processing. Additionally, "near perfect" is overselling it a bit, in my experience, given that they still struggle with idioms and cultural expressions.

Image generation is a bit better, except the model still isn't really aware of what the picture is, either. It's aware of what images are described as by others, not of the truth of the image it generates. It makes pictures of dragons quite well, but if you ask it for a contour map of a region, is it going to represent that region accurately? It's not concerned with truth; it's concerned with truthiness, the appearance of truth. We know when that distinction is important. It doesn't.
|
|
adastra22 4 days ago
I could make the exact same argument about the activation loops happening in your brain when you typed this out. Transformer architectures are not replicas of human brain architecture, but they are not categorically different either.

nonameiguess 3 days ago
You could, but it would be a wrong argument. Animals, and presumably early humans for at least a while, had no language, yet still managed to interact with and understand the world. We don't learn solely by reading, with ingestion of text as our only experience of anything.

For what it's worth, this is not some kind of slam-dunk, permanent, fundamental limitation of AI constructed from LLMs. Multi-modal learning gets you part of the way. Tool use gets you more of the way, enabling interaction with at least something. Embodiment and autonomy would get you further still, but at some point you need a true online learner that can update its own weights, not just simulate that with a very large context window.

Whether or not this entails any limitation in capability (as in, whether there is anything a human or animal can actually do cognitively that an LLM-based AI can't) is an open question, but it is a real difference. The person you're responding to, however similar their activation loops may be to software, didn't develop all of the behaviors and predictive modeling they currently have by reading and then have their brain frozen in time.
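To make that last distinction concrete, here's a toy sketch (mine, not anything from the thread): a PyTorch-style model that either keeps its weights frozen and accumulates "memory" in a context list, or actually updates its weights on each new observation. All names, shapes, and the tiny linear model are illustrative assumptions, not a claim about how any real LLM is trained.

    # Illustrative only: frozen weights + context window vs. a true online learner.
    import torch
    import torch.nn as nn

    model = nn.Linear(8, 1)                    # stand-in for a trained network
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)

    # (a) Frozen weights + context: new information only lives in the "prompt".
    context = []

    def answer_with_context(x, new_fact=None):
        if new_fact is not None:
            context.append(new_fact)           # memory is just accumulated text
        return model(x)                        # same weights on every call

    # (b) Online learner: each new observation changes the model itself.
    def online_update(x, y):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()                             # the weights are now different

    x, y = torch.randn(1, 8), torch.tensor([[1.0]])
    online_update(x, y)                        # (b) alters the model; (a) never would

The only point of the sketch is that in (a) the model object is identical before and after new information arrives, while in (b) it is not.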
|
|
loglog 4 days ago
Bullshitting generally gets people farther than anything else.
|
zoom6628 4 days ago
THIS!