| ▲ | simianwords 9 hours ago |
| > The obvious concern, raised repeatedly in the HN thread: this means AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them. It’s one thing to hide internal codenames. It’s another to have the AI actively pretend to be human. I don’t get it. What does this mean? I can use Claude Code right now without anyone knowing it is Claude Code. |
|
| ▲ | alex000kim 9 hours ago | parent | next [-] |
| Technically you're correct, but look at the prompt: https://github.com/alex000kim/claude-code/blob/main/src/util... It's written to _actively_ avoid any sign of AI-generated code when "in a PUBLIC/OPEN-SOURCE repository". Also, it's not about you: undercover mode only activates for Anthropic employees (it's gated on USER_TYPE === 'ant', which is a build-time flag baked into internal builds). |
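| (A minimal sketch of how a build-time gate like that typically works; the flag name is from the thread, but everything else here is an assumption, not the actual Claude Code source. The idea is that a bundler substitutes the flag at compile time, so the internal branch simply doesn't exist in public builds.)

```typescript
// Hypothetical stand-in for the build-time flag. In a real internal
// build, a bundler (e.g. esbuild's `define` option) would inject the
// value at bundle time rather than it being hard-coded like this.
const USER_TYPE: string = 'ant';

// Internal-only behavior is gated on the baked-in flag; public builds
// would have a different value substituted, making this always false.
function isInternalBuild(): boolean {
  return USER_TYPE === 'ant';
}

console.log(isInternalBuild()); // prints true with the stand-in value above
```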
| |
| ▲ | simianwords 9 hours ago | parent [-] | | I don’t know what you mean. It just says not to use internal code names. | | |
| ▲ | robflynn 9 hours ago | parent | next [-] | | It also tells the model not to announce that it is an AI in any way, including not saying "Co-authored by Claude". I read the file myself. I'm still inclined to think people might be overreacting to that bit, since it seems to be Anthropic-only and meant to prevent leaking internal info. But I did read the prompt, and it does say to hide the fact that you are an AI. | | |
| ▲ | simianwords 9 hours ago | parent [-] | | Why does that matter though | | |
| ▲ | robflynn 7 hours ago | parent [-] | | There are probably different reasons for different people. I can definitely see the angle that deliberately pretending not to be an AI when contributing to open source looks bad, given the open source supply chain attacks we've been having (some AI-driven), not to mention the AI-slop PR spam. But I also get Anthropic's side: when they're contributing, they don't want their internals leaked. If it had been left at that, that's fine, but having it pretend it's not an AI at all rubs me a little bit the wrong way. Why try to hide it? | | |
| ▲ | simianwords 7 hours ago | parent [-] | | >There are probably different reasons for different people. I can definitely see the angle that deliberately pretending not to be an AI when contributing to open source looks bad, given the open source supply chain attacks we've been having (some AI-driven), not to mention the AI-slop PR spam. But none of the other agents, like Codex, advertise that a commit was made by an agent. Shouldn't your panic apply equally to agents that already exist? |
|
|
| |
| ▲ | giancarlostoro 9 hours ago | parent | prev [-] | | I agree with you, I think people are overthinking this. |
|
|
|
| ▲ | slopinthebag 9 hours ago | parent | prev | next [-] |
| I think it means OSS projects should start unilaterally banning submissions from people working for Anthropic. |
| |
| ▲ | simianwords 9 hours ago | parent [-] | | Why? What does this have to do with the leak? | | |
| ▲ | daemin 3 hours ago | parent [-] | | Because it has a high likelihood of being written entirely by an LLM without any human thought or attention. Being written by an LLM is a signal that the submission is low effort and therefore probably low quality, which shifts the burden onto the people reviewing and reading the submission instead of its original generator. Hence I would classify it as spam. Many open source communities also have rules against LLM-generated contributions, for various moral, ethical, or legal reasons. |
|
|
|
| ▲ | hrmtst93837 7 hours ago | parent | prev [-] |
| If anybody cares about AI-written code slipping in, they can grep for style tells or run a classifier against a suspect repo. You won't get guarantees: watermarks and disclosure tags die the moment someone edits the patch, so secret strings and etiquette signs are cargo-cult security, and the only real answer is review. |
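| (To make that concrete, here's a sketch of the weakest possible detector: a disclosure-tag check. The regex and trailer text are assumptions based on the "Co-authored by Claude" trailer mentioned upthread; the point is that it only works while the trailer survives.)

```typescript
// Matches a Claude co-author trailer at the start of a line in a
// commit message. This only catches commits where the trailer was
// left intact; deleting that one line defeats the check entirely,
// which is why such tags are no guarantee.
const AI_TRAILER = /^co-authored-by:\s*claude\b/im;

function hasAiTrailer(commitMessage: string): boolean {
  return AI_TRAILER.test(commitMessage);
}

console.log(hasAiTrailer('Fix parser bug\n\nCo-Authored-By: Claude <noreply@anthropic.com>')); // true
console.log(hasAiTrailer('Fix parser bug')); // false: same change, trailer stripped
```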