aklein 3 hours ago

This article highlights how experts disagree on the meaning of (non-human) intelligence, but imo it dismisses the core problem a bit too quickly:

“LLMs only predict what a human would say, rather than predicting the actual consequences of an action or engaging with the real world. This is the core deficiency: intelligence requires not just mimicking patterns, but acting, observing real outcomes, and adjusting behavior based on those outcomes — a cycle Sutton sees as central to reinforcement learning.” [1]

An LLM by itself is a form of crystallized intelligence, but it does not learn and adapt without a human driver, and that capacity to learn and adapt is, to me, a key component of intelligent behavior.
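
To make that act-observe-adjust cycle concrete, here's a minimal sketch in plain Python (a toy two-armed bandit; all names and numbers are made up for illustration, not anything from the article): the agent acts, observes a real outcome from the environment, and updates its estimates based on what actually happened, which is the loop an LLM on its own doesn't close.

    import random

    # Toy environment: two slot machines with unknown payout probabilities.
    TRUE_PAYOUT = [0.3, 0.7]

    def pull(arm: int) -> float:
        """Act on the world and get a real outcome, not a predicted one."""
        return 1.0 if random.random() < TRUE_PAYOUT[arm] else 0.0

    # Agent state: running value estimates and pull counts per arm.
    values = [0.0, 0.0]
    counts = [0, 0]
    epsilon = 0.1  # explore 10% of the time

    for step in range(1000):
        # Act: mostly exploit the best-known arm, sometimes explore.
        if random.random() < epsilon:
            arm = random.randrange(2)
        else:
            arm = max(range(2), key=lambda a: values[a])

        # Observe: the environment, not a human, supplies the feedback.
        reward = pull(arm)

        # Adjust: incremental average update toward the observed outcome.
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]

    print(f"estimated payouts: {values}")  # should approach [0.3, 0.7]

The point isn't the bandit itself, it's that the estimates only improve because the agent keeps acting and getting corrected by reality, with no human in the loop supplying the feedback.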

[1] https://medium.com/@sulbha.jindal/richard-suttons-challenge-...