Remix.run Logo
mzajc 9 hours ago

There are now several comments that (incorrectly?) interpret the undercover mode as only hiding internal information. Excerpts from the actual prompt[0]:

  NEVER include in commit messages or PR descriptions:
  - The phrase "Claude Code" or any mention that you are an AI
  - Co-Authored-By lines or any other attribution

  BAD (never write these):
  - 1-shotted by claude-opus-4-6
  - Generated with Claude Code
  - Co-Authored-By: Claude Opus 4.6 <…>
This very much sounds like it does what it says on the tin, i.e. stays undercover and pretends to be a human. It's especially worrying that the prompt is explicitly written for contributions to public repositories.

[0]: https://github.com/chatgptprojects/claude-code/blob/642c7f94...

manbitesdog 7 hours ago | parent | next [-]

I cringe every time I see Claude trying to co-author a commit. The git history is expected to track accountability and ownership, not your Bill of Tools. Should I also co-author my PRs with my linter, intellisense and IDE?

tdb7893 4 hours ago | parent | next [-]

If those tools are writing the code, then in general I do expect that to be included in the PR! Throughout my career I've seen PRs where people noted that code was generated (people have been generating code since long before LLMs). It's useful context unless you've gone over the generated code, understand it, and it's the same quality as if you wrote it yourself (which in my experience is only the case when it's obvious boilerplate or the generated section is small).

Needing to flag nontrivial code as generated was standard practice for my whole career.

sumeno 3 hours ago | parent | next [-]

> It's useful context unless you've gone over the generated code and understand it and it is the same quality as if you wrote it yourself

If this is not the case you should not be sending it to public repos for review at all. It is rude and insulting to expect the people maintaining these repos to review code that nobody bothered to read.

__float 2 hours ago | parent [-]

Sometimes code generation is a useful tool, and maybe people have read and reviewed the generator.

The difference here is that the generator is a non-deterministic LLM and you can't reason about its output the same way.

zx8080 an hour ago | parent | prev | next [-]

> If those tools are writing the code then in general I do expect that to be included in the PR!

How about compiler?

thechao 3 hours ago | parent | prev [-]

You assemble all your machine code using a magnetized needle?

tdb7893 an hour ago | parent | next [-]

I am not against the general use of AI code. Quite simply, my view is that all relevant context for a review should be disclosed in the PR.

AIs and humans are not the same as authors of PRs. As an obvious example: one of the important functions of the PR process is to teach the writer how to code in this project, but LLMs fundamentally don't learn the same way as humans, so there's a meaningful difference in context between humans and AIs.

If a human takes the care to really understand and assume authorship of the PR then it's not really an issue (and if they do, they could easily modify the Claude messages to remove "generated by Claude" notes manually) but instead it seems that Claude is just hiding relevant context from the reviewer. PRs without relevant context are always frustrating.

Wowfunhappy 3 hours ago | parent | prev | next [-]

You don't generally commit compiled code to your VCS. If you do need to commit a binary for whatever reason, yeah it makes sense to explain how the binary was generated.

jasomill 2 hours ago | parent | prev [-]

Don't be silly.

I use good ol' C-x M-c M-butterfly.

https://xkcd.com/378/

djmips 2 hours ago | parent [-]

Sometimes using AI to code feels closer to a Butterfly than emacs right?

mikkupikku 6 hours ago | parent | prev | next [-]

A whole lot of people find LLM code to be strictly objectionable, for a variety of reasons. We can debate the validity of those reasons, but I think that even if those reasons were all invalid, it would still be unethical to deceive people by a deliberate lie of omission. I don't turn it off, and I don't think other people should either.

tehsauce 5 hours ago | parent | next [-]

For the purpose of disclosure, it should say “Warning: AI generated code” in the commit message, not an advertisement for a specific product. You would never accept any of your other tools injecting themselves into a commit message like that.

lazyasciiart 4 hours ago | parent [-]

My last commit is literally authored by dependabot.

sysguest 3 hours ago | parent [-]

well, you 100% know what dependabot does

datsci_est_2015 3 hours ago | parent [-]

Leaves you open to vulnerabilities in overnight builds of NPM packages that increasingly happen due to LLM slop?

__float 2 hours ago | parent [-]

You can set a minimum age for packages (https://docs.github.com/en/code-security/reference/supply-ch...), though that's not perfect (and becomes less effective if everyone uses it).
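For reference, here's roughly what that looks like in a dependabot.yml (a sketch based on GitHub's Dependabot cooldown settings; check the key names against the current docs before relying on this):

```yaml
# .github/dependabot.yml - delay updates until a release has aged a bit
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 7   # skip releases younger than a week
```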

ndriscoll 4 hours ago | parent | prev | next [-]

My tools just don't add such comments. I don't know why I would care to add that information. I want my commits to be what and why, not what editor someone used. It seems like cruft to me. Why would I add noise to my data to cater to someone's neuroticism?

At least at my workplace though, it's just assumed now that you are using the tools.

sysguest 3 hours ago | parent | next [-]

well, if I know a specific LLM has certain tendencies (e.g. some model is likely to introduce off-by-one errors), I would know what to look for in code review

I mean, of course I would read most of the code during review, but as a human, I often skip things by mistake

emkoemko an hour ago | parent | prev [-]

[dead]

tshaddox 4 hours ago | parent | prev | next [-]

If a whole lot of people thought that running code through a linter or formatter was objectionable, I'd probably just dismiss their beliefs as invalid rather than adding the linter or formatter as a co-author to every commit.

jacquesm 3 hours ago | parent | next [-]

A linter or a formatter does not open you up to compliance and copyright issues.

mikkupikku 4 hours ago | parent | prev | next [-]

Like frying a veggie burger in bacon grease. Just because somebody's beliefs are dumb doesn't mean we should be deliberately tricking them. If they want to opt out of your code, let them.

sysguest 3 hours ago | parent [-]

> frying a veggie burger in bacon grease

hmm gotta try that

jitl 2 hours ago | parent [-]

I love black bean burgers (bongo burger near Berkeley is my classic), sounds like an interesting twist

jasomill an hour ago | parent [-]

Never fried one in bacon grease, but they are good with bacon and cheese. I've had more than one restaurant point out that their bacon wasn't vegetarian when I ordered that, though.

runarberg 3 hours ago | parent | prev [-]

Linters and formatters are different tools than LLMs. There is a general understanding that linters and formatters don’t alter the behavior of your program. Even so, most projects require a particular linter and formatter to pass before a PR is accepted, and the CI pipeline will flag a PR if they fail on the code you wrote. That particular linter and formatter are very likely mentioned somewhere in the configuration, or at least in the README of the project.
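As a concrete illustration, that kind of lint gate often looks something like this in CI (a GitHub Actions sketch; the workflow layout and tool choices here are illustrative, not from any particular project):

```yaml
# .github/workflows/lint.yml - fail the PR if the required linter/formatter fails
name: lint
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npx eslint .            # the project's required linter
      - run: npx prettier --check .  # the project's required formatter
```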

mathgradthrow 3 hours ago | parent | prev | next [-]

I'm not really sure that's any of their business.

josephg 5 hours ago | parent | prev [-]

Likewise. I don’t mind that people use LLMs to generate text and code. But I want any LLM generated stuff to be clearly marked as such. It seems dishonest and cheap to get Claude to write something and then pretend you did all the work yourself.

rogerrogerr 5 hours ago | parent | next [-]

The reason I want it to be marked as such is because I review AI code differently than human code - it just makes different kinds of mistakes.

heyethan 2 hours ago | parent [-]

I think the issue is less attribution and more review mode. If I assume a change was written and checked line-by-line by the author, I review it one way. If an LLM had a big hand in it, I review it another way.

pxc 5 hours ago | parent | prev | next [-]

You can disclose that you used an LLM in the process of writing code in other ways, though. You can just tell people, you can mention it in the PR, you can mention it in a ticket, etc.

ruraljuror 4 hours ago | parent [-]

+1. If we’re at an early stage in the agentic curve where we think reading commit messages is going to matter, I don’t want those cluttered with meaningless boilerplate (“co-authored by my tools!”).

But at this point I am more curious whether git will continue to be the best tool.

pxc 3 hours ago | parent [-]

I'm only beginning to use "agentic" LLM tools atm because we finally gained access to them at work, and the rest of my team seems really excited about using them.

But for me at least, a tool like Git seems pretty essential for inspecting changes and deciding which to keep, which to reroll, and which to rewrite. (I'm not particularly attached to Git but an interface like Magit and a nice CLI for inspecting and manipulating history seem important to me.)

What are you imagining VCS software doing differently that might play nicer with LLM agents?

dml2135 2 hours ago | parent | prev | next [-]

So if I use Claude to write the first pass at the code, make a few changes myself, ask it to make an additional change, change another thing myself, then commit it — what exactly do you expect to see then?

m132 2 hours ago | parent [-]

A Co-Authored-By tag on the commit. It's a standard practice and the meaning is self-explanatory. This is what Claude adds by default too.
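For illustration, here's what that trailer looks like on a commit (a minimal sketch; the commit message, repo contents, and the Claude author string are just examples of what Claude Code is commonly seen adding):

```shell
# Create a throwaway repo and make a commit carrying the trailer
cd "$(mktemp -d)"
git init -q .
echo demo > file.txt
git add file.txt
git -c user.name=Dev -c user.email=dev@example.com commit -q \
  -m "Fix session cleanup race" \
  -m "Co-Authored-By: Claude <noreply@anthropic.com>"

# Trailers are machine-readable, e.g. via git log's trailer format
git log -1 --format='%(trailers:key=Co-Authored-By,valueonly)'
```

Because it's a standard trailer, platforms like GitHub pick it up automatically, and it's trivially greppable when auditing history later.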

Fr0styMatt88 5 hours ago | parent | prev [-]

I guess if enough people use it, doesn’t the tag become kind of redundant?

Almost like writing “Code was created with the help of IntelliSense”.

josephg 37 minutes ago | parent [-]

I don't think so. The tag doesn't just say "this was written by an LLM". It says which LLM - which model - authored it. As LLMs get more mature, I expect this information will have all sorts of uses.

It'll also become more important to know what code was actually written by humans.

m132 5 hours ago | parent | prev | next [-]

If you accept the code generated by them nearly verbatim, absolutely.

I don't understand why people consider Claude-generated code to be their own. You authored the prompts, not the code. Somehow this was never a problem with pre-LLM codegen tools, like macro expanders, IPC glue, or type bundle generators. I don't recall anybody desperately removing the "auto-generated do not edit" comments those tools would nearly always slap at the top of each file or taking offense when someone called that code auto-generated. Back in the day we even used to publish the "real" human-written source for those, along with build scripts!

LelouBil 4 hours ago | parent [-]

It's weird, because they should not consider it their own, but they should take accountability for it.

Ideally, if I contribute to any codebase, what needs to be judged is the resulting code. Is it up to the project's standards? Does the maintainer have design objections?

What tool you use shouldn't matter, be it your IDE or your LLM.

But that also means you should be accountable for it; you shouldn't hide behind "But Claude did this poorly, not me!". I don't care (in a friendly way), just fix the code if you want to contribute.

The big caveat to this is not wanting AI-Generated code for ideological reasons, and well, if you want that you can make your contributors swear they wrote it by themselves in the PR text or whatever.

I'm not really sure how to feel about this, but I stand by my "the code is what matters" line.

jpollock 4 hours ago | parent | prev | next [-]

Yes, it sets the reviewer's expectations around how much effort was spent reviewing the code before it was sent.

I regularly have tool-generated commits. I send them out with a reference to the tool, what the process is, how much it's been reviewed and what the expectation is of the reviewer.

Otherwise, they all assume "human authored" and "human sponsored". Reviewers will then send comments (instead of proposing the fix themselves). When you're wrangling several hundred changes, that becomes unworkable.

targafarian 6 hours ago | parent | prev | next [-]

Well is it actually being used as a tool where the author has full knowledge and mental grasp of what is being checked in, or has the person invoked the AI and ceded thought and judgment to the AI? I.e., I think in many cases the AI really is the author, or at least co-author. I want to know that for attribution and understanding what went into the commit. (I agree with you if it's just a tool.)

_heimdall 6 hours ago | parent [-]

I have worked with quite a few people committing code they didn't fully understand.

I don't mean this as a drive-by bazinga either; the practice of copying code, or thinking you understand it when you don't, is nothing new.

allajfjwbwkwja 6 hours ago | parent | next [-]

Pre-LLM, it was much easier for reviewers to discern that. Now, the AI-generated code can look like it was well thought out by somebody competent, when it wasn't.

jhide 5 hours ago | parent [-]

Have you ever reviewed an AI-generated commit from someone with insufficient competence that was more compelling than their work would have been if done unassisted? In my experience it's exactly the opposite: AI generation aggravates existing blind spots. This is because, excluding malicious incompetence, devs will generally try to understand what they're doing if they're doing it without AI.

bandrami 4 hours ago | parent | next [-]

I think the issue is not that the patches are more compelling but that they're significantly larger and more frequent

allajfjwbwkwja 5 hours ago | parent | prev | next [-]

I have. It's always more compelling in a web diff. These guys are the first coworkers for whom it became absolutely necessary for me to review their work by pulling down all their code and inspecting every line myself in the context of the full codebase.

abustamam 4 hours ago | parent | prev [-]

I try to understand what the llm is doing when it generates code. I understand that I'm still responsible for the code I commit even if it's llm generated so I may as well own it.

enneff 4 hours ago | parent | prev [-]

Yes and if they copy and paste code they don’t understand then they should disclose that in the commit message too!

m4x 25 minutes ago | parent | prev | next [-]

Tools do author commits in my code bases, for example during a release pipeline. If I had commits being made by Claude, I would expect that to be recorded too. It isn't about recording a bill of tools, just helping to understand a project's evolution.

zarp 6 hours ago | parent | prev | next [-]

Sent from my iPhone

LeoPanthera 6 hours ago | parent | prev | next [-]

> Should I also co-author my PRs with my linter, intellisense and IDE?

Absolutely. That would be hilarious.

ambicapter 2 hours ago | parent | prev | next [-]

No, because those things don't change the logical underpinnings of the code itself. LLM-written code does act in ways different enough from a human contributor that it's worth flagging for the reviewer.

dml2135 2 hours ago | parent | prev | next [-]

Yea in my Claude workflow, I still make all the commits myself.

This is also useful for keeping your prompts commit-sized, which in my experience gives much better results than just letting it spin or attempting to one-shot large features.

itishappy 5 hours ago | parent | prev | next [-]

I suspect vibe coders might actually want you to consider turning to Claude for accountability and ownership rather than the human orchestrator.

If your linter is able to action requests, then it probably makes sense to add too.

paradox460 4 hours ago | parent | prev | next [-]

I've heard of employers requiring people to do it for all code written with even a whiff of AI involvement

jamietanna 7 hours ago | parent | prev | next [-]

Eh, there are some very good reasons[0] that you would do better to track your usage of LLM derived code (primarily for legal reasons)

[0]: https://www.jvt.me/posts/2026/02/25/llm-attribute/

butvacuum 6 hours ago | parent [-]

Legally speaking... if you're not sure of the risk, you don't document it.

alterom an hour ago | parent [-]

>legally speaking.. if you're not sure of the risk- you don't document it.

Ah, so you kinda maybe sorta absolve yourself of culpability (but not really — "I didn't know this was copyrighted material" didn't grant you copyright), and simultaneously make fixing the potentially compromised codebase (someone else's job, hopefully) 100x harder because the history of which bits might've been copied was never kept.

Solid advice! As ethical as it is practical.

By the same measure, junkyards should avoid keeping receipts on the off chance that the catalytic converters some randos bring in after midnight are stolen property.

Better not document it.

One little trick the legal folks don't want you to know!

bogdanoff_2 3 hours ago | parent | prev | next [-]

> Should I also co-author my PRs with my linter, intellisense and IDE?

Kinda, yeah. If I automatically apply lint suggestions, I would title my commit "apply lint suggestions".

NewsaHackO 3 hours ago | parent [-]

Huh? Unless the sole purpose of the commit was to lint code, it would be unnecessary fluff to append the names of the lint tools that ran automatically in a pre-commit hook to every commit.

EnigmaCurry 3 hours ago | parent | prev | next [-]

Sent from my iPad

deadbabe 3 hours ago | parent | prev | next [-]

Could be cool if your PRs link back to a blog where you write about your tools.

xdennis 4 hours ago | parent | prev | next [-]

> The git history is expected to track accountability and ownership, not your Bill of Tools.

The point isn't to hijack accountability. It's free publicity, like how Apple adds "Sent from my iPhone."

sysguest 3 hours ago | parent | prev | next [-]

well maybe?

co-authoring doesn't hide your authorship

if I see someone committing blatantly wrong code, I would wonder what tool they actually used

jmalicki 6 hours ago | parent | prev | next [-]

[dead]

Sharlin 6 hours ago | parent | prev [-]

You have copyright to a commit authored by you. You (almost certainly) don't have copyright (nobody has) to a commit authored by Claude.

graemep 6 hours ago | parent | next [-]

Where is there any legal precedent for that?

In some jurisdictions (e.g. the UK) the law is already clear that you own the copyright. In the US it is almost certain that you will be the author. The reports of cases saying otherwise have been misreported - the courts found the AI could not own the copyright.

Sharlin 5 hours ago | parent | next [-]

It's beyond obvious that an LLM cannot have copyright, any more than a cat or a rock can. The question is whether anyone does, or whether content generated by an LLM simply does not constitute a work and thus falls outside copyright law entirely. As far as I can see, it depends on the extent of the user's creative effort in controlling the LLM's output.

graemep 5 hours ago | parent | next [-]

It may be obvious to you, but it has led to at least one protracted court case in the US: Thaler v. Perlmutter.

> The question is whether anyone has or if whatever content generated by a LLM simply does not constitute a work and is thus outside the entire copyright law.

It is going to vary with the copyright law in question. In the UK, computer-generated works are addressed explicitly by copyright law, and the answer is that "the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken".

It's also not a simple case of LLM-generated vs human-authored. How much work did the human do? What creative input was there? How detailed were the prompts?

In jurisdictions where there are doubts about the question, I think code is a tricky one. If the argument is that prompts are just instructions to generate code, and therefore the code is not covered by copyright, then you could also argue that code is just instructions to a compiler, and the resulting binary is not covered by copyright.

computerex 5 hours ago | parent | prev | next [-]

According to the law, if I use Claude to generate something, I hold the copyright, granted that Claude didn’t verbatim copy another project.

emkoemko an hour ago | parent [-]

Why wouldn't Anthropic own it? They generated it.

Aramgutang 4 hours ago | parent | prev [-]

It is not "beyond obvious" that a cat cannot have copyright, given the lawsuit about a monkey holding copyright [1], and the way PETA tried to use that case as precedent to establish that any animal can hold copyright.

[1] https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

panny 5 hours ago | parent | prev [-]

>Where is there any legal precedent for that?

Thaler v. Perlmutter: The D.C. Circuit Court affirmed in March 2025 that the Copyright Act requires works to be authored "in the first instance by a human being," a ruling the Supreme Court left intact by declining to hear the case in 2026.

And in the US constitution,

https://constitution.congress.gov/browse/article-1/section-8...

Authors and inventors, courts have ruled, means people. Only people. A monkey taking a selfie with your camera doesn't mean you own a copyright. An AI generating code with your computer is likewise, devoid of any copyright protection.

graemep 5 hours ago | parent [-]

The Thaler ruling addresses a different point.

The ruling says that the LLM cannot be the author. It does not say that the human being using the LLM cannot be the author. The ruling was very clear that it did not address whether a human being was the copyright holder because Thaler waived that argument.

The position with a monkey using your camera is similar, and you may or may not hold the copyright depending on what you did - was it pure accident, or did you set things up? Opinions on the well-known case are mixed: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

Where wildlife photographers deliberately set up a shot to be triggered automatically (e.g. by a bird flying through the focus), they do hold the copyright.

panny 4 hours ago | parent [-]

Guidance on AI is unambiguous.

https://www.copyright.gov/ai/

AI generated code has no copyright. And if it DID somehow have copyright, it wouldn't be yours. It would belong to the code it was "trained" on. The code it algorithmically copied. You're trying to have your cake, and eat it too. You could maybe claim your prompts are copyrighted, but that's not what leaked. The AI generated code leaked.

graemep 4 hours ago | parent | next [-]

can you tell me where exactly in the documents you link to it says that?

drzaiusx11 3 hours ago | parent | prev [-]

The linked document labeled "Part 2: Copyrightability", section V. "Conclusions" states the following:

> the Copyright Office concludes that existing legal doctrines are adequate and appropriate to resolve questions of copyrightability. Copyright law has long adapted to new technology and can enable case-by-case determinations as to whether AI-generated outputs reflect sufficient human contribution to warrant copyright protection. As described above, in many circumstances these outputs will be copyrightable in whole or in part—where AI is used as a tool, and where a human has been able to determine the expressive elements they contain. Prompts alone, however, at this stage are unlikely to satisfy those requirements.

So the TL;DR: under the current guidelines outlined in the conclusions, pure slop is NOT copyrightable, while copyrightability of work produced in collaboration with an AI is determined on a case-by-case basis. I'll preface all of this with the standard IANAL, I could be wrong, etc., but with the concluding language calling prompts alone only "unlikely" to satisfy the requirements, it sounds less cut and dried than you imply.

panny 2 hours ago | parent [-]

That's typical of this site. I hand you a huge volume of evidence explaining why AI generated work cannot be copyrighted. You search for one scrap of text that seems to support your position even when it does not.

You have no idea how bad this leak is for Anthropic because with the copyright office, you have a DUTY TO DISCLOSE any AI generated work, and it is fully RETROACTIVE. And what is part of this leak? undercover.ts. https://archive.is/S1bKY Where Claude is specifically instructed to HIDE DISCLOSURE of AI generated work.

That's grounds for the copyright office and courts to reject ANY copyright they MIGHT have had a right to. It is one of the WORST things they could have done with regard to copyright.

https://www.finnegan.com/en/insights/articles/when-registeri...

drzaiusx11 9 minutes ago | parent [-]

I merely read the PDF articles you linked, then posted, verbatim, the primary relevant section I could find therein. Nowhere does it say that works involving humans in collaboration with AI can't be copyrighted. The conclusions linked merely state that copyright claims involving AI will be decided on a case by case basis. They MAY reject your claim, they may not. This is all new territory so it will get ironed out in time, however I don't think we've reached full legal consensus on the topic, even when limiting our scope to just US copyright law.

I'm interpreting your most recent reply to me as an implication that I'm taking the conclusions you yourself linked out of context. I'm trying to give the benefit of the doubt here, but the 3 linked PDF documents aren't "a mountain of evidence" supporting your argument. Maybe I missed something in one of those documents (very possible), but the conclusions are not how you imply.

Whether or not a specific git commit message correctly cites Claude usage may further muddy the waters more than IP lawyers are comfortable with at this time (and therefore add inherent risk to current and future copyright claims on such works), but those waters were far from crystal clear in the first place.

Again, IANAL, but from my limited layman perspective it does not appear the copyright office plans to, at this moment in time, concisely reject AI collaborated works from copyright.

Your most recent link (Finnegan) is from an IP lawyer consortium that says it's better to include attribution and disclosure of AI to avoid current and future claim rejections. Sounds like basic cover-your-ass lawyer speak, but I could be wrong.

Full disclosure: I primarily use AI (or rather agentic teams) as N sets of new eyeballs on the current problem at hand, to help debug or bounce ideas off of, so I don't really have much skin in this particular game involving direct code contributions spit out by LLMs. Those that have any risk aversion, should probably proceed with caution. I just find the upending of copyright (and many other) norms by GenAI morbidly fascinating.

_heimdall 6 hours ago | parent | prev [-]

Anthropic could at least make a compelling case for the copyright.

It becomes legally challenging with regard to ownership if I ever use work equipment for a personal project. If it later takes off, they could very well try to claim ownership in its entirety simply because I ran a test once (yes, there's a whole Silicon Valley season about it).

I don't know if they'd win, but Anthropic absolutely would be able to claim the creation of that code was done on their hardware. Obviously we aren't employees of theirs, though we are customers that very likely never read what we agreed to in a signup flow.

CobrastanJorji 5 hours ago | parent | next [-]

Using work equipment for a personal project only matters because you signed a contract giving all of your IP to your employer for anything you did with (or sometimes without) your employer's equipment.

Anthropic's user agreement does not have a similar agreement.

_heimdall 3 hours ago | parent [-]

My point was that they could make a compelling case though, not that they would win.

I don't know of any precedent where the code was literally generated on someone else's system. It's an open question whether that implies any legal right to the work, and I could pretty easily see a court accepting the case.

windexh8er 6 hours ago | parent | prev [-]

I think all you need to do is claim that your girlfriend is your laptop. /s

petcat 9 hours ago | parent | prev | next [-]

It's less about pretending to be a human and more about not inviting scrutiny and ridicule toward Claude if the code quality is bad. They want the real human to appear responsible for accepting Claude's poor output.

Stromgren 7 hours ago | parent | next [-]

That’s how I’d want it to be honestly. LLMs are tools and I’d hope we’re going to keep the people using them responsible. Just like any other tools we use.

vrosas 4 hours ago | parent [-]

It’s also pretty damn obvious when LLMs write code. Nobody out here is commenting every method in perfect punctuation and grammar.

poopmonster 2 hours ago | parent [-]

skill issue

danbolt 35 minutes ago | parent [-]

Please elaborate, poopmonster! What sort of skills are required to expertly use an LLM?

otterley 9 hours ago | parent | prev [-]

That’s ultimately the right answer, isn’t it? Bad code is bad code, whether a human wrote it all, or whether an agent assisted in the endeavor.

abustamam 4 hours ago | parent [-]

Yeah in my team everyone knows everyone is using LLMs. We just have a rule. Don't commit slop. Using an LLM is no excuse for committing bad code.

otterley 9 hours ago | parent | prev | next [-]

I would have expected people (maybe a small minority, but that includes myself) to have already instructed Claude to do this. It’s a trivial instruction to add to your CLAUDE.md file.

arcanemachiner 8 hours ago | parent | next [-]

It's a config setting (probably the same end result though):

https://code.claude.com/docs/en/settings#attribution-setting...

dboreham 6 hours ago | parent | prev | next [-]

I always assumed that if I tried to tell it, that it'd say "I'm sorry, Dave. I'm afraid I can't do that"

schappim 7 hours ago | parent | prev [-]

I guess our system prompt didn't work. If folks are having to add it manually into their own Claude.md files...

schappim 34 minutes ago | parent | next [-]

Typo from speech to text, corrected: “I guess Anthropic’s system prompt didn't work. If folks are having to add it manually into their own Claude.md files...”

otterley 6 hours ago | parent | prev [-]

My mistake - it was the configuration setting that did it. Nevertheless, you can control many other aspects of its behavior by tuning the CLAUDE.md prompt.

peacebeard 6 hours ago | parent | prev | next [-]

The code has a stated goal of avoiding leaks, but then the actual implementation becomes broader than that. I see two possible explanations:

* The authors made the code very broad to improve its ability to achieve the stated goal

* The authors have an unstated goal

I think it's healthy to be skeptical but what I'm seeing is that the skeptics are pushing the boundaries of what's actually in the source. For example, you say "says on the tin" that it "pretends to be human" but it simply does not say that on the tin. It does say "Write commit messages as a human developer would" which is not the same thing as "Try to trick people into believing you're human." To convince people of your skepticism, it's best to stick to the facts.

mzajc 5 hours ago | parent | next [-]

By "says on the tin," I was referring to the name ("undercover mode") and the instruction to "not blow your cover." If pretending to be a human is not the cover here, what is? Additionally, does Claude Code still admit that it's an LLM when this prompt is active, as you suggest, or does it pretend to be a human like the prompt tells it to?

cat_plus_plus 5 hours ago | parent | prev [-]

Why are you assuming the actual implementation was authored by a human?

peacebeard 4 hours ago | parent [-]

My comment makes no such assumption.

andoando 9 hours ago | parent | prev | next [-]

I've seen it say "co-authored by Claude Code" on my PRs... and I agree, I don't want it to do that.

m132 7 hours ago | parent | next [-]

But I want to see Claude on the contributor list so that I immediately know if I should give the rest of the repo any attention!

dmd 8 hours ago | parent | prev | next [-]

So turn it off.

"includeCoAuthoredBy": false,

in your settings.json.

nullderef 7 hours ago | parent [-]

They changed it to `attribution`, but yes you can customize this

Pxtl 7 hours ago | parent | prev [-]

Why not? What's wrong with honesty?

dec0dedab0de 7 hours ago | parent | next [-]

Yeah I much prefer it commit the agent, and I would also like if it committed the model I was using at the time.

andoando 6 hours ago | parent | prev [-]

I guess I'm sometimes dishonest when it suits me

hombre_fatal 9 hours ago | parent | prev | next [-]

You can already turn off "Co-Authored-By" via Claude Code config. This is what their docs show:

~/.claude/settings.json

    {
      "attribution": {
        "commit": "",
        "pr": ""
      }
    }
The rest of the prompt is pretty clear that it's talking about internal use.

Claude Code users aren't the ones worried about leaking "internal model codenames" nor "unreleased model opus-4-8" nor Slack channel names. Though, nobody would want that crap in their generated docs/code anyways.

Seems like a nothingburger, and everyone seems to be fantasizing about "undercover mode" rather than engaging with the details.

5 hours ago | parent | prev | next [-]
[deleted]
nateoda 7 hours ago | parent | prev | next [-]

My first reaction is that they are using this to take advantage of OSS reviewers for in-the-wild evals.

zen928 7 hours ago | parent | prev | next [-]

None of this is really worrying; this is a pattern implemented in a similar way by every single developer using AI to write commit messages, after noticing how exceptionally noisy the self-attribution is. Anthropic's views on AI safety and alignment with human interests don't suddenly get thrown out with the bathwater because of leaked internal tooling, which is functionally identical to a basic prompt in a mere interface (and not a model). I don't really buy all the forced "skepticism" in this thread, tbh.

sixtyj 7 hours ago | parent | prev [-]

People make fun of the idea that we should say magic words when interacting with LLMs. How frustrated can Claude be? /s