astrange 3 days ago:
LLMs can only think out loud, so in some sense this part of the answer helps convince the model that it's answering a good question. The same goes for the "That's not X, it's Y" construct: the model actually needs to say it. (There are some exceptions for reasoning models.)