cma 6 hours ago
Everyone using Claude Code on a personal subscription is opted in by default to having their data trained on. Private troves of data like this could end up producing a winner-take-all scenario: more data yields better models, which attract more users, which results in more exclusive data (what Altman calls the data flywheel).
spenvo 6 hours ago | parent
PSA: this is true (the defaults), but there's a "Help improve Claude" setting that you can disable here: https://claude.ai/settings/data-privacy-controls It's my understanding that, as long as this is off, Anthropic does not train on Claude Code conversations, inputs, or outputs -- if anyone knows otherwise, please say so and provide a link if possible.
johnbarron 5 hours ago | parent
>> Everyone using Claude code on a personal subscription is default opted in to getting their data trained on

This is not true if you use AWS Bedrock, whether for your private data or in a business context. It's one of AWS's core arguments for using the service. [1]

"...At Amazon, we don’t use your prompts and outputs to train or improve the underlying models in Amazon Bedrock and SageMaker JumpStart (including those from third parties), and humans won’t review them. Also, we don’t share your data with third-party model providers. Your data remains private to you within your AWS accounts..."

[1] https://aws.amazon.com/blogs/security/securing-generative-ai...
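For reference, Anthropic's docs describe routing Claude Code through Bedrock via environment variables. A minimal sketch of that configuration, assuming AWS credentials are already set up; the region and model ID below are illustrative placeholders, not a recommendation:

```shell
# Route Claude Code through AWS Bedrock instead of the first-party Anthropic API.
# Assumes AWS credentials are already configured (e.g. `aws configure` or SSO)
# and that the chosen model is enabled in your Bedrock account.
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION=us-east-1   # illustrative; use your Bedrock region

# Illustrative model ID -- substitute one available in your account.
export ANTHROPIC_MODEL='us.anthropic.claude-sonnet-4-20250514-v1:0'

claude
```

With this in place, prompts and outputs flow through your own AWS account rather than a personal Anthropic subscription, which is the distinction the quoted AWS policy covers.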