In the context of coding agents, they do simulate "reasoning": when you feed the agent the output of its code, it can correct itself.
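That feedback loop can be sketched in a few lines. This is a minimal, hypothetical illustration, not any particular agent framework: `generate` stands in for a real LLM call, and the fake model below is hard-coded to fail once and then "correct itself" after seeing the error output.

```python
import subprocess
import sys

def run_code(code: str):
    """Execute a code string in a subprocess; return (ok, stderr)."""
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stderr

def feedback_loop(generate, task: str, max_attempts: int = 3) -> str:
    """Run generated code and feed any error back until it succeeds.

    `generate(task, code, error)` is a placeholder for a model call;
    on the first attempt `code` and `error` are both None.
    """
    code, error = None, None
    for _ in range(max_attempts):
        code = generate(task, code, error)
        ok, error = run_code(code)
        if ok:
            return code
    return code

# A fake "model": wrong on the first draft, fixed after seeing the traceback.
def fake_model(task, code, error):
    if error is None:
        return "print(1 / 0)"   # first draft crashes with ZeroDivisionError
    return "print(1)"           # revised draft after reading the error

fixed = feedback_loop(fake_model, "print a number")
print(fixed)  # → print(1)
```

The point is that the "reasoning" lives in the loop: the model itself is stateless, but routing execution output back into the prompt lets it revise its own mistakes.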