Wouldn't this negatively affect the quality of the output?
Actually, the opposite: as with chain-of-thought prompting, having the LLM spell out its reasoning explicitly in the output tends to improve quality, because the reasoning tokens it generates condition the final answer.
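As a concrete illustration, here is a minimal sketch of the prompt-construction side of chain-of-thought prompting. The prompt wording and the idea of appending "Let's think step by step" follow common CoT practice; any actual LLM call is omitted, since the point is only that the CoT prompt invites explicit reasoning in the output.

```python
def build_direct_prompt(question: str) -> str:
    # The model is pushed to emit an answer immediately, with no
    # room for intermediate reasoning in its output.
    return f"Q: {question}\nA:"

def build_cot_prompt(question: str) -> str:
    # The model is invited to write out its reasoning first; those
    # explicit tokens then condition the final answer it produces.
    return f"Q: {question}\nA: Let's think step by step."

question = ("A bat and a ball cost $1.10 together; the bat costs "
            "$1.00 more than the ball. How much does the ball cost?")
print(build_direct_prompt(question))
print(build_cot_prompt(question))
```

Whether the extra verbosity pays off depends on the task, but for multi-step problems the explicit reasoning usually helps rather than hurts.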