raw_anon_1111 | 18 hours ago
How is believing that Microsoft is honest about not using your private GitHub code to train Copilot any different from believing Anthropic when you opt out? Every company I listed is training models for its business; I'm not saying they are all using your data.

Any company that uses Claude Code without an enterprise contract with Anthropic is an idiot. But if you really want that warm and fuzzy feeling, you can always use Claude Code via an AWS account and Bedrock-hosted Anthropic models. I assure you that AWS (a former employer) is not using your Claude-on-Bedrock traffic to train its models. Amazon may be evil. But they are not stupid.
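For anyone curious what "Claude Code via Bedrock" looks like in practice, here is a minimal sketch. The `CLAUDE_CODE_USE_BEDROCK` switch is documented by Anthropic; the region and model ID below are placeholders you would swap for whatever your AWS account actually has access to, and authentication rides on the standard AWS credential chain.

```shell
# Hedged sketch: point Claude Code at Bedrock-hosted Anthropic models
# instead of Anthropic's first-party API. Region and model ID are
# illustrative placeholders, not a recommendation.
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION=us-east-1

# Credentials come from the usual AWS chain: env vars, ~/.aws/credentials,
# SSO, or an instance/role profile. No Anthropic API key is involved.

# Then launch the CLI as usual:
# claude
```

The upshot of the arrangement is contractual, not technical: your prompts and code go to AWS under your AWS agreement, and Bedrock's terms cover model-provider access to that traffic.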
UqWBcuFx6NV4r | 10 hours ago
> Any company that doesn't have an enterprise contract with Anthropic and uses Claude Code is an idiot.

I understand that working for Amazon will have given you the typical unjustified sense of intelligence and authority, and the entirely insular view of the world, that people tend to develop at FAANG companies, but you need to do your best to fight against it, dude. You don't know every organisation. You don't know their risk profiles.

Are you saying that a two-person, bootstrapped, spare-time side project is the creation of two "idiots" because they don't have an enterprise agreement with Anthropic? What about the organisations where the code is more of an incidental aspect of the business than the secret sauce? You know that this describes the vast, vast majority of organisations, right?

Do you genuinely think your code is so precious that anyone else having access to it (let alone munged up in an LLM) would be in any way detrimental to the business? That is very, very, very rarely the case. We're all capable of reading 'Designing Data-Intensive Applications', I assure you.