dibujaron 5 hours ago:

A less cynical explanation: it's heavily trained to ask follow-up questions at the end of a response, to drive more conversation and more engagement. That's useful both for making sure you want to renew your subscription, and probably also for generating more training data for future models. That's sufficient explanation for the behavior we're seeing.
g947o 2 hours ago:

I could be wrong, but I remember that Claude models didn't really ask follow-up questions. But since GPT models do that, and somehow people like it (why?), Anthropic started doing it as well.
neya 2 hours ago:

Because Anthropic can do no wrong, correct?