yogthos 4 hours ago

I'd argue you can have a much more precise definition than that. My definition of intelligence would be a system that has an internal simulation of a particular domain, and uses this simulation to guide its actions within that domain. Being able to explain your actions derives directly from having a model of the environment.

For example, we all have an internal physics model in our heads that's built up through our continuous interaction with our environment. That model acts as our shared context. That's why, if I tell you to bring me a cup of tea, I have a reasonable expectation that you understand what I requested and can execute the action intelligently. You have a conception of a table, of a cup, of tea, and critically our conceptions are similar enough that we can both be reasonably sure we understand each other.

Incidentally, when humans end up talking about abstract topics, they often run into the exact same problem as LLMs: the shared context is missing, and we end up talking past each other.

The key problem with LLMs is that they currently lack this feedback loop with the environment. The system merely strings tokens together in a statistically likely fashion, but it doesn't really have a model of the domain it's working in to anchor those tokens to.
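To make the "statistically likely tokens, no domain model" point concrete, here's a deliberately crude sketch: a bigram generator that picks each next word purely by observed frequency. The corpus and function names are made up for illustration; real LLMs are vastly more sophisticated, but the underlying objective is still next-token likelihood, not a grounded model of cups or tables.

```python
import random
from collections import defaultdict

# Toy "language model": continuations are chosen purely by observed
# frequency. It has no conception of what a cup or a table *is*.
corpus = "the cup is on the table the tea is in the cup".split()

bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, length, rng=random.Random(0)):
    out = [start]
    for _ in range(length):
        choices = bigrams.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))  # sample by raw frequency
    return " ".join(out)

print(generate("the", 6))
```

The output is locally plausible English, yet nothing in the system can check a sentence against a model of the world, which is exactly the gap being described.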

In my opinion, stuff like agentic coding or embodiment with robotics moves us towards genuine intelligence. Here we have AI systems that have to interact with the world, and they get feedback on when they do things wrong, so they can adjust their behavior based on that.
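A minimal sketch of that feedback loop, using a toy epsilon-greedy bandit: the agent tries actions, receives a reward from a hypothetical environment, and shifts its preferences toward what worked. The action names and reward model are invented for illustration; the point is only the structure of act → feedback → adjust.

```python
import random

# Hypothetical actions a robot might try when asked to fetch a cup.
actions = ["grasp_handle", "grasp_rim", "push_cup"]
values = {a: 0.0 for a in actions}   # running estimate of each action's value
counts = {a: 0 for a in actions}

def environment(action):
    # Assumed reward model: grasping the handle is the move that works.
    return 1.0 if action == "grasp_handle" else 0.0

rng = random.Random(42)
for step in range(200):
    if rng.random() < 0.1:                      # explore occasionally
        a = rng.choice(actions)
    else:                                       # otherwise exploit best estimate
        a = max(actions, key=values.get)
    reward = environment(a)                     # feedback from the world
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]  # running-average update

best = max(actions, key=values.get)
print(best)  # the agent converges on "grasp_handle"
```

Nothing here is intelligent on its own, but it has the ingredient the pure token predictor lacks: its behavior is anchored to consequences in an environment rather than to statistics over text.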