antonvs 14 hours ago
> LLMs don't have enough of a model of the world to understand anything.

This is binary thinking, and it's fallacious.

For your orbital mechanics example, sure, it's difficult for LLMs to develop good models of the physical world, in large part because they aren't able to interact with the world directly and have to rely on human texts to describe it to them.

For your software development example, you're making a similar mistake: the fact that their strongest suit is not producing fully working systems doesn't mean that they have no world model, or that their successes are as random as you seem to think ("Sometimes you get lucky, sometimes you don't," "sometimes working code falls out").

But if you try, for example, asking an LLM to identify a bug in a program, or asking it questions about how a program works, you'll find that, from a functional perspective, it exhibits excellent understanding that strongly implies a good world model. You may be taking your own thought processes for granted too much to appreciate how good LLMs are at this.

The idea that "there's no abstracted model of software development as a process in there" is hard to reconcile with the often superhuman responses they're capable of when you use them in the scenarios where they're most effective.