astrange 4 days ago
> Roughly, actual intelligence needs to maintain a world model in its internal representation

This is GOFAI metaphor-based development, which never once produced anything useful. They just sat around saying things like "people have world models," then decided that if they programmed something and called it a "world model" they'd get intelligence. It didn't work out, but they still went around claiming people have "world models" as if they hadn't just made that up. An alternative thesis, "people do things that worked the last time they did them," explains both language and action planning better; e.g. you don't form a model of the contents of your garbage in order to take it to the dumpster.

https://www.cambridge.org/core/books/abs/computation-and-hum...
sixo 4 days ago | parent
I see no reason to believe an effective LLM-scale "world-modeling" model would look anything like the kinds of things previous generations of AI researchers were doing. It will probably look a lot more like a transformer architecture--big, compute-intensive, and with a fairly simple structure--but with a learning process that is different in some key way that makes different manifold structures fall out.
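For concreteness on the "fairly simple structure" point: a single transformer block is essentially self-attention plus an MLP, each wrapped in a residual connection. Here is a minimal NumPy sketch, purely illustrative and not anything from the thread; all names, shapes, and the single-head/pre-norm choices are my own assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def transformer_block(x, params):
    # Single-head self-attention followed by a two-layer ReLU MLP,
    # each with a residual connection and pre-layer-norm.
    Wq, Wk, Wv, Wo, W1, W2 = params
    h = layer_norm(x)
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    x = x + (attn @ v) @ Wo
    h = layer_norm(x)
    x = x + np.maximum(h @ W1, 0.0) @ W2
    return x

# Toy usage: 8 tokens, model width 16 (sizes are arbitrary).
rng = np.random.default_rng(0)
d = 16
params = [rng.normal(scale=0.1, size=(d, d)) for _ in range(4)] + \
         [rng.normal(scale=0.1, size=(d, 4 * d)),
          rng.normal(scale=0.1, size=(4 * d, d))]
tokens = rng.normal(size=(8, d))
print(transformer_block(tokens, params).shape)  # (8, 16)
```

The point of the sketch is that the per-block computation is tiny and generic; whatever "world-modeling" behavior emerges would have to come from scale and from the training process, not from hand-built structure.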