danans | 12 hours ago
> Bro. Evolution is random walk. That means most of the changes are random and arbitrary based on whatever allows the squirrel to survive.

For the vast majority of evolutionary history, very similar forces have shaped us and squirrels. The mutations are random, but the selections are not. If squirrels are a stretch for you, take the closest human relative: chimpanzees. There is a very reasonable hypothesis that their brains work very similarly to ours, far more similarly than ours do to an LLM.

> So an LLM can't continuously learn? You realize that LLMs are deployed agentically all the time now so they both continuously learn and follow goals?

That is not continuous learning. The network does not retrain through that process; everything is held in the agent's context (see the first sketch at the end of this comment). The agent has no intrinsic goals nor any ability to develop them. It merely samples based on its prior training and its current context. Biological intelligence, by contrast, retrains constantly.

> The energy efficiency is a byproduct of hardware. The theory of LLMs and machine learning is independent from the flawed silicon technology that is causing the energy efficiencies.

There is no evidence that a transformer model's inefficiency is hardware-based. There is direct evidence that the inefficiency is driven by the fact that LLM inference and training are both auto-regressive. Auto-regression maps to compute cycles, and compute cycles map to energy consumption. That's a problem with the algorithm, not the hardware (second sketch below).

> The fact that training transformers on massive amounts of data produced this level of intelligence was a total surprise for all the experts

The level of intelligence produced is only impressive compared to the prior state of the art, and even then it is impressive at modeling only the narrow band of intelligence represented by encoded language (not all language) produced by humans. In almost every other aspect of intelligence - notably continuous learning driven by intrinsic goals - LLMs fail.
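
To make the retraining point concrete, here is a minimal sketch of the agentic loop I mean. llm_generate and run_tool are illustrative stand-ins, not any particular framework's API; the point is that the weights are only ever read, and everything the agent accumulates lives in the growing context.

    def llm_generate(context: list[str]) -> str:
        # Stand-in for a frozen model: samples from fixed weights given the context.
        return f"action chosen after reading {len(context)} context items"

    def run_tool(action: str) -> str:
        # Stand-in for executing whatever tool the model picked (search, code, ...).
        return f"observation for: {action}"

    context = ["task: summarize the quarterly report"]  # everything the agent "knows" lives here
    for _ in range(3):
        action = llm_generate(context)    # pure inference: weights are read, never updated
        observation = run_tool(action)
        context.append(observation)       # the only thing that grows is the prompt

    # The model's parameters are identical to what they were before the loop ran;
    # throw away `context` and nothing the agent "learned" persists.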
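
And a back-of-the-envelope sketch of the auto-regression argument, using an assumed per-token FLOP figure rather than a measured one: each generated token requires one sequential forward pass, so compute, and therefore energy, scales with output length no matter what silicon it runs on.

    # Assumed placeholder: ~2 FLOPs per parameter per generated token (a common
    # rule of thumb for a dense transformer forward pass), for an illustrative
    # 70B-parameter model. Not a measurement of any specific system.
    PARAMS = 70e9
    FLOPS_PER_TOKEN = 2 * PARAMS

    def generation_flops(tokens_generated: int) -> float:
        total = 0.0
        for _ in range(tokens_generated):  # strictly sequential: token t needs token t-1
            total += FLOPS_PER_TOKEN       # one full forward pass per emitted token
        return total

    print(f"{generation_flops(1000):.2e} FLOPs for a 1000-token reply")
    # ~1.4e14 FLOPs, and this ignores attention's additional cost, which grows
    # with context length. Faster chips shrink the constant, not the loop.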