svara 6 days ago

Insightful and thanks for the comment, but I'm not sure I'm getting to the same conclusion as you. I think I lost you at:

> It's not just that all models are wrong and some are useful but that many models are useful but wrong. What used to be considered edge cases do not stay ...

That's not a contradiction? That popular quote says it right there: "all models are wrong". There is no true model of reality, but there's a process of refinement that produces models which make increasingly good predictions.

It stands to reason that an ideal next-token predictor would require an internal model of the world at least as powerful as our currently most powerful scientific theories. It also stands to reason that this model can, in principle, be trained from raw observational data, because that's how we did it.

And conversely, it stands to reason that a next-token predictor as powerful as the current crop of LLMs contains models of the world substantially more powerful than the models that powered what we used to call autocorrect.

Do you disagree with that?

godelski 6 days ago | parent

  > That's not a contradiction?
Correct. No contradiction was intended. As you quote, I wrote "It's not just that". This is not setting up a contrasting point; it is setting up a point that follows. Which, as you point out, it does. So let me rephrase:

  > If all models are wrong but some are useful then this similarly means that all useful models are wrong in some way.
Why flip it around? To highlight the part where the models are incorrect, since that is the thesis of my argument.

With that part I do not disagree.

  > It stands to reason that an ideal next-token predictor would require an internal model of the world at least as powerful as our currently most powerful scientific theories.
With this part I do not agree. There's not only the strong evidence I previously mentioned of this happening throughout history, but we can even see LLMs doing it today. We can see them become very good predictors, yet the world they model is significantly different from the one we live in. Here are two papers studying exactly that![0,1]

To help make this clear, we really need to understand that you can't have a "perfect" next-token predictor (or any perfect model). To "perfectly" generate the next token would require infinite time, energy, and information. You can look at this through the lens of the Bekenstein bound[2], the Data Processing Inequality[3], or even the No Free Lunch theorem[4].

While I say you can't make a "perfect" predictor, that doesn't mean you can't get 100% accuracy on some test set. But that is a local result: as those papers show, one doesn't need an accurate world model to achieve such high accuracy. And as history shows, we not only make similar mistakes, but (this is not a contradiction, it follows from the previous statement) we are resistant to updating our models. And for good reason! It is hard to differentiate between models that all make accurate predictions.
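To make that concrete, here's a toy sketch I made up (it has nothing to do with the setups in those papers, it's just the same idea in code): a predictor can get a perfect score on a test set while holding an outright wrong model of the rule it is predicting.

  # Toy example: 100% test accuracy without a correct "world model".
  def true_leap_year(y):
      # The actual Gregorian rule.
      return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

  def naive_leap_year(y):
      # Wrong model of the calendar: "every 4th year is a leap year."
      return y % 4 == 0

  test_set = range(1990, 2011)  # happens to contain no century year except 2000
  acc = sum(naive_leap_year(y) == true_leap_year(y) for y in test_set) / len(test_set)
  print(acc)                                          # 1.0 on this test set
  print(naive_leap_year(1900), true_leap_year(1900))  # True False: wrong off-distribution

Nothing in the test results distinguishes the naive model from the true one; the difference only shows up on inputs the test set never probed.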

I don't think you realize you're making some jumps in logic. Which I totally understand; they are subtle. But I think you will find them if you get really nitpicky with your argument, making sure that each step follows from the last. Make sure to define everything: e.g. next-token predictor, a prediction, internal model, powerful, and, most importantly, how "we did it".

Here's where your logic fails:

You are making the assumption that, given some epsilon bound on accuracy, there will only be one model accurate to within that bound. Or, in other words, that there is only one model which makes perfect predictions, so by decreasing model error we must converge to that model.

The problem with this is that there are an infinite number of models that make accurate predictions. As a trivial example, I'm going to redefine the addition operation. Instead of computing "a + b" we will now compute "2 + a + b - 2". The detour is useless, but it produces the correct result for any a and b. There are much more convoluted ways to do this where it is non-obvious that this is happening.
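In code the same trick looks like this (a minimal sketch, function names made up):

  # Two "models" of addition that agree on every input but differ internally.
  def add(a, b):
      return a + b

  def add_detour(a, b, k=2):
      # Pointless detour through +k ... -k: same answer, different procedure.
      return k + a + b - k

  # Any finite battery of input/output tests reports them as identical...
  assert all(add(a, b) == add_detour(a, b)
             for a in range(-100, 101) for b in range(-100, 101))
  # ...and varying k gives an unbounded family of equally "correct" models.

Input/output behaviour alone can't tell you which internal procedure is being run.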

When we get to the epsilon bound, there's another problem. Let's assume the LLM makes predictions as accurate as a human's. You have no guarantee that it fails in the same ways. Actually, it would be preferable if LLMs failed in different ways than humans do, as the combined effort would then allow a reduction in error that neither could achieve alone.
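A quick sketch of why differing failure modes matter (toy numbers I made up):

  # Two predictors with equal accuracy but disjoint failure sets.
  truth = list(range(100))
  human = truth.copy()
  llm = truth.copy()
  for i in range(0, 10):
      human[i] = -1   # the human errs on items 0-9
  for i in range(90, 100):
      llm[i] = -1     # the LLM errs on items 90-99

  # Because the failures don't overlap, every error shows up as a disagreement
  # that can be flagged for review; overlapping errors would slip through silently.
  flagged = sum(h != m for h, m in zip(human, llm))
  print(flagged)      # 20: all 20 errors surfaced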

And remember, I only made the claim that you can't prove something correct simply through testing, that is, through empirical evidence alone. The Bekenstein bound says as much. I didn't say you can't prove something correct. Don't ignore the condition; it is incredibly important. You made the assumption that we "did it" through "raw observational data" alone. We did not. That was an insufficient condition for us, and that's my entire point.

[0] https://arxiv.org/abs/2507.06952

[1] https://arxiv.org/abs/2406.03689

[2] https://en.wikipedia.org/wiki/Bekenstein_bound

[3] https://en.wikipedia.org/wiki/Data_processing_inequality

[4] https://en.wikipedia.org/wiki/No_free_lunch_theorem