rocqua 5 hours ago
When you put an LLM in reasoning mode, it will approximately have a conversation with itself. This mimics an inner monologue. That conversation is held in text, not in any internal representation. That text is called the reasoning trace. You can then analyse that trace.
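Since the trace is plain text, analysing it can be as simple as string processing. A minimal sketch, assuming a hypothetical response shape with separate `reasoning` and `answer` fields (real provider APIs structure this differently):

```python
# Hypothetical response shape: a dict with "reasoning" and "answer" keys.
# This is a stand-in for whatever a given provider's API actually returns.

def split_trace(response: dict) -> tuple[str, str]:
    """Separate the reasoning trace from the final answer."""
    return response.get("reasoning", ""), response.get("answer", "")

# Toy response, as it might come back from a reasoning-mode call.
response = {
    "reasoning": "First, 17 * 3 = 51. Then 51 + 4 = 55.",
    "answer": "55",
}

trace, answer = split_trace(response)

# The trace is ordinary text, so it can be inspected like any other text,
# e.g. splitting it into sentence-level steps for analysis.
steps = [s.strip() for s in trace.split(".") if s.strip()]
print(len(steps))  # prints 2
```

The point is only that the trace lives at the text level: whatever analysis you run (step counting, searching for intermediate claims) operates on the emitted tokens, not on the model's internal activations.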
bandrami 5 hours ago
Unless things have changed drastically in the last 4 months (the last time I looked at it), those traces are not stored but reconstructed when asked, which is still the same problem.