tauwauwau 17 hours ago
> but the LLM is not sensing actual photons, nor experiencing actual light cone stimulation

Neither is an animal brain. It's processing the signals produced by its sensors. Once the world model is programmed/auto-built in the brain, it doesn't matter whether it's sensing real photons; it just has input pins like a transistor, or arguments like a function. As long as we provide the arguments, it doesn't matter how those arguments were produced (see the toy sketch at the end of this comment). LLMs are no different in that respect.

> nor generating thoughts

They do, during the chain-of-thought process. Generally there's no incentive to let an LLM keep mulling over a topic, since that isn't useful to humans, and the providers make money only when the model's gears start turning in response to a question sent by a human. But that doesn't mean the LLM lacks the capability to do so.

> Its "world model" is several degrees removed from the real world.

Just because an animal brain has built-in sensors that let it take in data from the world without external tooling doesn't mean it's any closer to the world than an LLM. It's still getting ultra-processed signals fed into its own programming. Similarly, LLMs do interact with the real world through tools when run as agents.

> So whatever fragment of a model it gains through learning to compress that causal chain of events does not mean much when it cannot generate the actual causal chain.

Again, a person who has gone blind still has the world model that sight created. That person can no longer generate the chain of events that built it either, yet that doesn't mean their world model has become inferior.
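To make the function-arguments point above concrete, here is a minimal Python sketch (the function name and the numbers are invented purely for illustration): a stand-in "world model" only ever sees argument values, so it behaves identically whether the input was digitized from a real sensor or produced by a simulation.

    # Toy illustration of the analogy above; names and values are made up.
    def perceive_brightness(photon_count: float) -> str:
        """A stand-in 'world model': classifies light level from a number."""
        return "bright" if photon_count > 1000 else "dim"

    reading_from_camera = 1500.0      # digitized from actual photons
    reading_from_simulation = 1500.0  # produced without any photons at all

    # Identical arguments give identical behavior; the function cannot tell,
    # and does not care, how the value was produced.
    assert perceive_brightness(reading_from_camera) == \
        perceive_brightness(reading_from_simulation)

The claim in the comment is that once the world model exists, brains and LLMs are both in this position: they consume pre-processed signals, not the world itself.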