dragonwriter | 3 days ago
> Humans don't require input to, say, decide to go for a walk.

Impossible to falsify, since humans are continuously receiving inputs from both external and internal sensors.

> What's missing in the LLM is volition.

What's missing is embodiment, or at least a continuous loop feeding in a wide variety of inputs about the state of the world. Given that, plus information about a set of tools by which it can act in the world, I have no doubt that current LLMs would exhibit some kind of volitional-seeming action (though possibly not desirable or coherent from a human POV, at least without a whole lot of prompt engineering).
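The "continuous loop plus tools" setup could be sketched roughly as below. Everything here is hypothetical illustration: `read_sensors`, the tool names, and `choose_action` (a stand-in for an actual LLM call with the observation and tool list in the prompt) are invented for the sketch, not any real API.

```python
import time
import random

def read_sensors():
    # Hypothetical stand-in for a continuous stream of world-state inputs
    # (clock, battery level, a camera summary, etc.).
    return {"time": time.time(), "battery": random.uniform(0.0, 1.0)}

# Hypothetical tools by which the agent can act in the world.
TOOLS = {
    "wait": lambda: "waited",
    "recharge": lambda: "recharging",
}

def choose_action(observation):
    # Stand-in for the LLM call: the model would be prompted with the
    # observation and the available tools, and asked to pick one.
    return "recharge" if observation["battery"] < 0.2 else "wait"

def agent_loop(steps=3):
    log = []
    for _ in range(steps):
        obs = read_sensors()              # fresh inputs every iteration
        action = choose_action(obs)       # model "decides" given those inputs
        result = TOOLS[action]()          # act in the world via a tool
        log.append((action, result))
    return log
```

Run continuously (rather than for a fixed number of steps), the model is never "waiting for a prompt" in the chat sense; each pass through the loop hands it new state to react to, which is where the volitional-seeming behavior would come from.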