naasking 4 hours ago

I think it's correct to say that LLMs have word models, and given that words are correlated with the world, they also have degenerate world models, just with lots of inconsistencies and holes. Tokenization issues aside, LLMs will likely also have some limitations due to this. Multimodality should address many of these holes.

swyx 3 hours ago | parent | next [-]

(editor here) yes, a central nuance i try to communicate is not that LLMs cannot have world models (in fact they've improved a lot) - it's that they build them so inefficiently as to be impractical to scale - we'd have to grow them by trillions more parameters, whereas our human brains manage very good multiplayer adversarial world models on 20W of power and ~100T synapses.

naasking an hour ago | parent [-]

I agree LLMs are inefficient, but I don't think they are as inefficient as you imply. Human brains use a lot less power, sure, but they're also a lot slower and worse at parallelism. An LLM can write an essay in a few minutes that would take a human days. If you aggregate all the power the human used over that time, you're looking at kWh, much more than the LLM used (an order of magnitude or more). And this doesn't even consider batch parallelism, which further reduces power use per request.
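A rough back-of-envelope sketch in Python (every figure below is an assumption for illustration, not a measurement):

    # Energy per essay: human vs. LLM, under assumed numbers.
    human_power_w = 100   # whole-body metabolic rate ~100 W (brain alone ~20 W)
    human_hours = 16      # say two working days to draft the essay (assumed)
    human_kwh = human_power_w * human_hours / 1000

    gpu_power_w = 700     # one high-end accelerator under load (assumed)
    gen_seconds = 120     # a couple of minutes of generation (assumed)
    batch_size = 32       # concurrent requests sharing the hardware (assumed)
    llm_kwh = gpu_power_w * gen_seconds / 3600 / 1000 / batch_size

    print(f"human: {human_kwh:.2f} kWh, LLM: {llm_kwh:.5f} kWh, "
          f"ratio: {human_kwh / llm_kwh:.0f}x")
    # human: 1.60 kWh, LLM: 0.00073 kWh, ratio ~2194x under these assumptions

The exact ratio is very sensitive to the assumptions, but it's hard to pick plausible numbers that don't leave the LLM well ahead per essay.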

But I do think that there is further underlying structure that can be exploited. Recent work on geometric and latent interpretations of reasoning, geometric approaches to accelerating grokking, and linear replacements for attention are all promising directions, and multimodal training will further improve semantic synthesis.
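On the attention point specifically, the core trick behind linear replacements is a kernel feature map that lets the key-value product be computed once and reused, dropping the O(n^2) term. A minimal sketch in Python/NumPy (the particular feature map here is an assumption; published variants differ):

    import numpy as np

    def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
        # Softmax attention computes softmax(Q K^T) V, which is O(n^2) in
        # sequence length n. Kernelized "linear" attention approximates the
        # softmax similarity with phi(q) . phi(k), so (phi(K)^T V) can be
        # computed once and shared across all queries: O(n) in n.
        Qp, Kp = phi(Q), phi(K)        # (n, d) feature-mapped queries/keys
        KV = Kp.T @ V                  # (d, d_v), independent of query position
        Z = Qp @ Kp.sum(axis=0)        # (n,) per-query normalization
        return (Qp @ KV) / Z[:, None]  # (n, d_v) attention output

    n, d = 512, 64
    Q, K, V = (np.random.randn(n, d) for _ in range(3))
    out = linear_attention(Q, K, V)    # shape (512, 64); no n x n matrix built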

AreShoesFeet000 4 hours ago | parent | prev | next [-]

So you think that enough of the complexity of the universe we live in is faithfully represented in the products of language and culture?

People won’t even admit their sexual desires to themselves and yet they keep shaping the world. Can ChatGPT access that information somehow?

D-Machine 4 hours ago | parent | next [-]

The amount of faith a person has in LLMs getting us to e.g. AGI is a good implicit test of how strongly that person (incorrectly) believes most thinking is linguistic (and, to some degree, conscious).

Or at least, this is the case if we mean LLM in the classic sense, where the "language" in the middle refers to natural language. Also note GP carefully mentioned the importance of multimodality, which, if you include e.g. images, audio, and video, starts to look much closer to the majority of the kinds of inputs humans learn from. LLMs can't go too far, for sure, but VLMs could conceivably go much, much farther.

red75prime 3 hours ago | parent [-]

And the latest large models are predominantly LMMs (large multimodal models).

D-Machine 2 hours ago | parent [-]

Sort of, but the images, video, and audio available to them are far more limited in range and depth than the textual sources, and it isn't clear that most LLM textual outputs actually draw much on anything learned from these other modalities. Most VLM setups work the other way around, using textual information to augment vision capabilities. Further, most aren't truly multimodal: they have separate backbones handling the different modalities, or are even just distinct models switched between by a broader dispatch model (a sketch of the common pattern is below). There are exceptions, of course, but it is still an accurate generalization that the multimodality of these models is one-way and limited at this point.
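To make the "bolted-on" pattern concrete, here is a minimal sketch in Python/PyTorch of the common adapter-style design; all class and parameter names are illustrative, not any particular model's API:

    import torch
    import torch.nn as nn

    class AdapterVLM(nn.Module):
        """Common 'bolted-on' multimodality: a separate vision backbone whose
        features are linearly projected into the language model's token space.
        The language model still does nearly all the reasoning in text."""
        def __init__(self, vision_backbone, llm, vision_dim, llm_dim):
            super().__init__()
            self.vision = vision_backbone                  # e.g. a frozen ViT (assumed)
            self.project = nn.Linear(vision_dim, llm_dim)  # thin trainable adapter
            self.llm = llm                                 # pretrained text model

        def forward(self, image, text_embeds):
            img_feats = self.vision(image)        # (batch, n_patches, vision_dim)
            img_tokens = self.project(img_feats)  # map into the LLM embedding space
            # Prepend image tokens; the text model attends over them like words.
            return self.llm(torch.cat([img_tokens, text_embeds], dim=1))

The projection is the only place the modalities meet, which is why the vision side mostly serves the text side rather than the reverse.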

So right now the limitation is that an LMM is probably not trained on any images or audio that will help outside of specific tasks. E.g. I'm sure years of recorded customer-service calls might make LMMs good at replacing a lot of call-centre work, but the relative absence of e.g. unedited videos of people cooking means they just fall back on mostly text when it comes to cooking advice (and this is why they so often fail there).

But yes, that's why the modality caveat is so important. We're still nowhere close to the ceiling for LMMs.

throw310822 3 hours ago | parent | prev | next [-]

> you think that enough of the complexity of the universe we live in is faithfully represented in the products of language and culture?

Absolutely. There is only one kind of model that can consistently produce novel sentences that aren't absurd, and that is a world model.

> People won’t even admit their sexual desires to themselves and yet they keep shaping the world

How do you know about other people's sexual desires, then, if not through language? (excluding very limited first-hand experience)

red75prime 3 hours ago | parent | prev | next [-]

> Can ChatGPT access that information somehow?

Sure. Just like any other information. The system makes a prediction. If the prediction does not use sexual desires as a factor, it's more likely to be wrong. Backpropagation deals with it.
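A tiny sketch in Python/NumPy of that mechanism: a hidden factor that is never observed directly still gets internalized by gradient descent, because ignoring it costs prediction accuracy (the data setup here is invented purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    hidden = rng.standard_normal(n)            # the never-admitted factor
    observed = hidden[:, None] + 0.5 * rng.standard_normal((n, 3))  # noisy traces
    labels = (hidden > 0).astype(float)        # behaviour shaped by the factor

    # One logistic unit trained by gradient descent (backprop's simplest case).
    w, b, lr = np.zeros(3), 0.0, 0.1
    for _ in range(500):
        p = 1 / (1 + np.exp(-(observed @ w + b)))
        w -= lr * observed.T @ (p - labels) / n   # cross-entropy gradient
        b -= lr * (p - labels).mean()

    p = 1 / (1 + np.exp(-(observed @ w + b)))
    print("accuracy:", ((p > 0.5) == labels).mean())  # ~0.9, well above chance

The model never sees the hidden variable, but its weights end up encoding it anyway, because that is what minimizes the loss.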

naasking an hour ago | parent | prev [-]

> So you think that enough of the complexity of the universe we live in is faithfully represented in the products of language and culture?

Math is language, and we've modelled a lot of the universe with math. I think there's still a lot of synthesis needed to bridge visual, auditory and linguistic modalities though.

D-Machine 4 hours ago | parent | prev [-]

It's also important to handle cases where the word patterns (or token patterns, rather) have a negative correlation with the patterns in reality. There are some domains where the majority of content on the internet is actually just wrong, or where different approaches lead to contradictory conclusions.

E.g. syllogistic arguments based on linguistic semantics can lead you deeply astray if those arguments don't properly measure and quantify at each step.

I ran into this in a somewhat trivial case recently, trying to get ChatGPT to tell me whether washing mushrooms ever actually matters in practice when cooking (anyone who cooks and has tested it knows that a quick wash has basically no impact for any conceivable cooking method, except if you wash e.g. after cutting and are immediately serving them raw).

Until I forced it to cite respectable sources, it just repeated the usual (false) advice against washing, because most of the training data is wrong and repeats a myth. As pushback it even gave nonsense arguments about water percentages and the thermal energy required to evaporate even small amounts of surface water, i.e. theory that just isn't relevant once you actually quantify it. It also made up claims about surface moisture interfering with breading (when all competent breading has a dredging step that won't work if the surface is bone dry anyway...). Only after many prompts demanding claims supported by reputable sources did it finally surface Harold McGee's and Kenji López-Alt's actual empirical tests showing that washing just doesn't matter practically.
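For what it's worth, the evaporation objection doesn't survive a back-of-envelope check either. A quick sketch in Python (the water and power figures are assumptions):

    surface_water_g = 2.0     # residual water after a quick wash and pat-dry (assumed)
    latent_heat_j_g = 2260    # heat of vaporization of water at 100 C
    warming_j_g = 4.2 * 80    # warming from ~20 C to boiling (4.2 J/g per degree C)

    energy_j = surface_water_g * (latent_heat_j_g + warming_j_g)
    pan_power_w = 1000        # heat actually reaching the food, ~1 kW (assumed)

    print(f"{energy_j:.0f} J, i.e. ~{energy_j / pan_power_w:.0f} s of pan time")
    # ~5200 J, roughly 5 extra seconds of saute: not a practical difference.

Under any plausible numbers, the "thermal energy" cost amounts to seconds of extra cooking time.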

So because the training data for cooking is utterly polluted, because the model has no ACTUAL understanding or model of how cooking works, and because first-principles physics and chemistry aren't very useful against the messy reality of cooking, LLMs fail quite badly at producing useful cooking advice.