usefulposter 9 hours ago

This would make a great blogpost.

>I'm always going to be paranoid that I miss some opt-out somewhere

FYI, Anthropic's recent policy change used some insidious dark patterns to opt existing Claude Code users in to data sharing.

https://news.ycombinator.com/item?id=46553429

>whatever vague weasel-words the lawyers made you put in the terms of service

At any large firm, product and legal work in concert to achieve the goal (training data); they know what they can get away with.

simonw 8 hours ago

I often suspect that the goal isn't exclusively training data so much as the freedom to do things they haven't yet thought of.

Imagine you come up with non-vague consumer terms for your product that perfectly match your current needs as a business. Everyone agrees to them and is happy.

And then OpenAI discover some new training technique which shows incredible results but relies on a tiny sliver of seemingly unimportant data that you've just cut yourself off from!

So I get why companies want terms that sound friendly but keep their options open for future, unanticipated needs. It's sensible from a business perspective, but it sucks for someone like me who is frequently asked how safe it is to sign up as a customer of these companies, because I can't give credible answers.