neya 5 hours ago

I was a Pro subscriber until last week. When I was chatting with Claude, it kept asking a lot of personal questions that seemed only very, very vaguely relevant to the topic. And then it struck me: all these AI companies are doing is building detailed user models, either for targeted advertising or to be sold off to the highest bidder. It hasn't happened yet with Anthropic, but when the bubble money runs out, there won't be a lot of options, and all we'll see is a blog post: "oops! sorry, we did what we promised you we wouldn't". Oldest trick in the tech playbook.

dibujaron 5 hours ago | parent [-]

A less cynical explanation: it's heavily trained to ask follow-up questions at the end of a response, to drive more conversation and more engagement. That's useful both for making sure you want to renew your subscription and probably also for generating more training data for future models. That's a sufficient explanation for the behavior we're seeing.

g947o 2 hours ago | parent | next [-]

I could be wrong, but I remember that Claude models didn't really used to ask follow-up questions. But since GPT models started doing that, and somehow people like it (why?), Anthropic started doing it as well.

neya 2 hours ago | parent | prev [-]

Because Anthropic can do no wrong, correct?