bandrami 5 hours ago
But it's not a reasoning trace. Models could produce one if they were designed to (an actual stack of the calls and the tensor states at each call, probably with a helpful lookup table for the tokens), but they specifically haven't been made to do that.
rocqua 5 hours ago
When you put an LLM in reasoning mode, it effectively has a conversation with itself, mimicking an inner monologue. That conversation is held in plain text, not in any internal representation. That text is called the reasoning trace, and you can then analyse it.
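For example (a minimal sketch, assuming a DeepSeek-R1-style model that wraps its trace in <think>...</think> tags; hosted APIs usually return the trace as a separate response field instead, and the helper name here is just illustrative):

```python
import re

def extract_reasoning_trace(output: str) -> tuple[str, str]:
    """Split raw model output into (reasoning trace, final answer).

    Assumes the trace is delimited by <think>...</think>, as some open
    reasoning models emit it; adjust the delimiters for other models.
    """
    match = re.search(r"<think>(.*?)</think>", output, re.DOTALL)
    if not match:
        return "", output.strip()          # no trace emitted
    trace = match.group(1).strip()
    answer = output[match.end():].strip()  # everything after the trace
    return trace, answer

raw = "<think>2 apples + 3 apples = 5 apples.</think>There are 5 apples."
trace, answer = extract_reasoning_trace(raw)
print(trace)   # this is the text you can then analyse
print(answer)
```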