When you do a chat, are reasoning traces from prior model outputs in the LLM context?
No, they are normally stripped out. Typically only the final assistant messages from earlier turns are sent back to the model; the reasoning traces produced for those turns are dropped when the conversation history is reassembled for the next request.
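A minimal sketch of that reassembly step, under assumed names: the message schema (`role`, `content`, `reasoning`) and the `build_context` helper are hypothetical, invented for illustration; real APIs use different field names, but the pattern is the same — prior reasoning is filtered out before the history is sent back.

```python
def build_context(history):
    """Return the messages to send on the next turn, with any
    reasoning traces from earlier assistant outputs stripped."""
    context = []
    for msg in history:
        # Drop the (hypothetical) "reasoning" field; keep everything else.
        cleaned = {k: v for k, v in msg.items() if k != "reasoning"}
        context.append(cleaned)
    return context

history = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "4",
     "reasoning": "Adding 2 and 2 gives 4."},
    {"role": "user", "content": "And 3+3?"},
]

print(build_context(history))
```

Only the visible answers survive into the next context, so the model cannot see how it reasoned on earlier turns, only what it said.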