jnovek 9 hours ago:
The AI can't really describe its reasoning, though. It can only look at its context history and find a justification (which it will then present as reasoning). In my experience, asking the model "why did you do that" carries substantial hallucination risk.
larsfaye 4 hours ago:
Not only can it not describe its reasoning, it also can't "remember" if you ask it later; it can only observe what is. Nor can it be consistent; I've had it shift its reasoning numerous times as the questioning continues, only to come full circle to its original statement while apologizing profusely for being misleading. The model will always be completing the story you start with it. There's no opinion to uncover because there's no experience that occurred. It's impossible to know where your influence ends and the model's factual basis begins.
0gs 9 hours ago:
True, though I have found that forcing an LLM agent to document the reasoning behind each "decision" it makes (I use an agent skill to do this) seems to lead to better decision-making. Or at least to more justifiable decisions, even if the justification is bad. The skill boils down to something like the sketch below.
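Roughly (the field names and the parser here are just illustrative, not the exact skill):

    # Require a structured record before every action, then parse it out
    # so the rationale can be reviewed (or the action rejected) before it runs.
    import re

    DECISION_SKILL = """Before every tool call or code change, append a block:
    DECISION: <what you chose>
    ALTERNATIVES: <options you considered and rejected>
    RATIONALE: <why this choice, in one or two sentences>"""

    def extract_decisions(agent_output: str) -> list[dict]:
        """Pull the structured decision records out of the agent's reply."""
        pattern = re.compile(
            r"DECISION:\s*(?P<decision>.+?)\n"
            r"ALTERNATIVES:\s*(?P<alternatives>.+?)\n"
            r"RATIONALE:\s*(?P<rationale>.+?)(?:\n\n|\Z)",
            re.DOTALL,
        )
        return [m.groupdict() for m in pattern.finditer(agent_output)]

It doesn't make the model's answer to "why did you do that" any more trustworthy, but it does force the justification to be written down at decision time instead of reconstructed afterwards.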
dalmo3 9 hours ago:
While you're technically correct, I've found that a simple "give me the strongest arguments for and against this, cite your sources" works wonders.
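It's trivial to wrap as a reusable template (the wrapper and the example claim below are just placeholders):

    def steelman_prompt(claim: str) -> str:
        # Ask for the strongest case on both sides, with sources, for any claim.
        return (
            f"Regarding the following claim:\n{claim}\n\n"
            "Give me the strongest arguments for and against this, "
            "cite your sources."
        )

    print(steelman_prompt("We should migrate the service to Rust."))

Asking for both sides up front helps keep the model from simply elaborating on whatever position the question already implies.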