fennecfoxy 8 days ago

I think what your quote is trying to say essentially boils down to this: LLMs can be given facts in the context, and we _hope_ the statistical model picks up on that information and the tool calls, but it isn't _guaranteed_.

Unlike human beings such as yourself (presumably), LLMs do not have agency; they do not have conscious or active thought. All they do is predict the next token.
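To make "predict the next token" concrete, here's a minimal sketch of an autoregressive generation loop. The probability table is a made-up toy standing in for the neural network (a real LLM learns these distributions over a huge vocabulary), but the loop itself has the same shape: look at the sequence so far, sample one token, repeat.

```python
import random

# Toy "language model": a hand-written table of next-token probabilities.
# This replaces the neural network purely for illustration.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(prompt: str, max_tokens: int = 10, seed: int = 0) -> str:
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        # Condition on the sequence so far (here, just the last token).
        probs = NEXT_TOKEN_PROBS.get(tokens[-1], {"<end>": 1.0})
        # Sample the next token from the distribution. Nothing here
        # "decides" or "reasons" -- it is sampling, step after step.
        choices, weights = zip(*probs.items())
        next_token = rng.choices(choices, weights=weights)[0]
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return " ".join(tokens)
```

Facts you put in the prompt only influence the output through these sampled steps, which is why nothing about their use is guaranteed.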

I've thought about the above a lot. These models are certainly capable of a lot, but they do not in any form or fashion emulate the consciousness that we have. Not yet.