maebert 7 days ago
It might be worth noting that humans also struggle with maintaining a coherent world model over time. Luckily, we don’t have to; we externalize a lot of our representations. When shopping together with a friend we might put our stuff on one side of the shopping cart and our friend’s on the other. There’s a reason we don’t just play chess in our heads but use a chess board. We use notebooks to write things down, etc. Some reasoning models can do similar things (keep a persistent notebook that gets fed back into the context window on every pass), but I expect that we need a few more dirty representationalist tricks to get there. In other words, I don’t think it’s an LLM’s job to have a world model; an LLM is just one part of an AI system.
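
For the notebook trick, roughly what I mean, as a minimal sketch (call_llm here is just a stand-in for whatever model API you use, not any particular library):

    def call_llm(prompt: str) -> str:
        # Placeholder: wire this up to whatever model you actually use.
        raise NotImplementedError

    def run_with_notebook(task: str, passes: int = 5) -> str:
        notebook = ""  # externalized state that survives across passes
        for _ in range(passes):
            prompt = (
                f"Task: {task}\n\n"
                f"Notebook (your notes from earlier passes):\n{notebook}\n\n"
                "Work on the task, then rewrite the notebook so a future "
                "pass can pick up where this one left off."
            )
            # The reply *is* the updated notebook; only this string, not the
            # whole conversation, gets carried into the next pass.
            notebook = call_llm(prompt)
        return notebook

The point is just that the coherent state lives outside the model, the same way the shopping cart or the chess board does.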