| ▲ | mikkupikku 7 hours ago |
| A whole lot of people find LLM code to be strictly objectionable, for a variety of reasons. We can debate the validity of those reasons, but I think that even if those reasons were all invalid, it would still be unethical to deceive people by a deliberate lie of omission. I don't turn it off, and I don't think other people should either. |
|
| ▲ | tehsauce 6 hours ago | parent | next [-] |
| For the purpose of disclosure, it should say “Warning: AI generated code” in the commit message, not an advertisement for a specific product. You would never accept any of your other tools injecting themselves into a commit message like that. |
| |
|
| ▲ | ndriscoll 5 hours ago | parent | prev | next [-] |
| My tools just don't add such comments. I don't know why I would care to add that information. I want my commits to be what and why, not what editor someone used. It seems like cruft to me. Why would I add noise to my data to cater to someone's neuroticism? At least at my workplace though, it's just assumed now that you are using the tools. |
| |
| ▲ | sysguest 4 hours ago | parent | next [-] | | well if I know a specific LLM has certain tendencies (e.g. some model is likely to introduce off-by-one errors), I would know what to look for in code review. I mean, of course I would read most of the code during review, but as a human, I often skip things by mistake | |
|
|
| ▲ | tshaddox 6 hours ago | parent | prev | next [-] |
| If a whole lot of people thought that running code through a linter or formatter was objectionable, I'd probably just dismiss their beliefs as invalid rather than adding the linter or formatter as a co-author to every commit. |
| |
| ▲ | jacquesm 4 hours ago | parent | next [-] | | A linter or a formatter does not open you up to compliance and copyright issues. | |
| ▲ | mikkupikku 6 hours ago | parent | prev | next [-] | | Like frying a veggie burger in bacon grease. Just because somebody's beliefs are dumb doesn't mean we should be deliberately tricking them. If they want to opt out of your code, let them. | | |
| ▲ | sysguest 4 hours ago | parent | next [-] | | > frying a veggie burger in bacon grease hmm gotta try that | | |
| ▲ | jitl 3 hours ago | parent [-] | | I love black bean burgers (bongo burger near Berkeley is my classic), sounds like an interesting twist | | |
| ▲ | jasomill 3 hours ago | parent [-] | | Never fried one in bacon grease, but they are good with bacon and cheese. I have had more than one restaurant point out, when I ordered one, that their bacon wasn't vegetarian, though. |
|
| |
| ▲ | block_dagger an hour ago | parent | prev [-] | | In your view, those who prefer veggie burgers are dumb. Am I misinterpreting? |
| |
| ▲ | runarberg 4 hours ago | parent | prev [-] | | Linters and formatters are different tools than LLMs. There is a general understanding that linters and formatters don’t alter the behavior of your program. Even so, most projects require a particular linter and formatter to pass before a PR is accepted, and the CI pipeline will flag a PR if that linter or formatter fails on the code you wrote. That particular linter and formatter is very likely mentioned somewhere in the project's configuration, or at least in its README. |
|
|
| ▲ | mathgradthrow 4 hours ago | parent | prev | next [-] |
| I'm not really sure that's any of their business. |
|
| ▲ | josephg 7 hours ago | parent | prev [-] |
| Likewise. I don’t mind that people use LLMs to generate text and code. But I want any LLM generated stuff to be clearly marked as such. It seems dishonest and cheap to get Claude to write something and then pretend you did all the work yourself. |
| |
| ▲ | rogerrogerr 6 hours ago | parent | next [-] | | The reason I want it to be marked as such is because I review AI code differently than human code - it just makes different kinds of mistakes. | | |
| ▲ | heyethan 3 hours ago | parent [-] | | I think the issue is less attribution and more review mode. If I assume a change was written and checked line-by-line by the author, I review it one way. If an LLM had a big hand in it, I review it another way. |
| |
| ▲ | pxc 6 hours ago | parent | prev | next [-] | | You can disclose that you used an LLM in the process of writing code in other ways, though. You can just tell people, you can mention it in the PR, you can mention it in a ticket, etc. | | |
| ▲ | ruraljuror 5 hours ago | parent [-] | | +1. If we’re at an early stage in the agentic curve where we think reading commit messages is going to matter, I don’t want those cluttered with meaningless boilerplate (“co-authored by my tools!”). But at this point I am more curious whether git will continue to be the best tool. | | |
| ▲ | pxc 5 hours ago | parent [-] | | I'm only beginning to use "agentic" LLM tools atm because we finally gained access to them at work, and the rest of my team seems really excited about using them. But for me at least, a tool like Git seems pretty essential for inspecting changes and deciding which to keep, which to reroll, and which to rewrite. (I'm not particularly attached to Git but an interface like Magit and a nice CLI for inspecting and manipulating history seem important to me.) What are you imagining VCS software doing differently that might play nicer with LLM agents? |
|
| |
| ▲ | dml2135 4 hours ago | parent | prev | next [-] | | So if I use Claude to write the first pass at the code, make a few changes myself, ask it to make an additional change, change another thing myself, then commit it — what exactly do you expect to see then? | | |
| ▲ | m132 4 hours ago | parent [-] | | A Co-Authored-By tag on the commit. It's a standard practice and the meaning is self-explanatory. This is what Claude adds by default too. |
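For reference, the trailer in question is just an extra paragraph at the end of the commit message. A minimal sketch (the exact line Claude Code appends may vary by version; the email shown is the one it is commonly reported to use):

```shell
# Sketch: the Co-Authored-By trailer convention, which e.g. GitHub
# uses to credit additional authors on a commit. Claude Code appends
# a line of this shape by default (exact wording may differ).
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "v1" > feature.txt
git add feature.txt
git commit -q -m "Add first pass at feature" \
  -m "Co-Authored-By: Claude <noreply@anthropic.com>"
git log -1 --format=%B   # trailer appears after a blank line
```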
| |
| ▲ | Fr0styMatt88 6 hours ago | parent | prev [-] | | I guess if enough people use it, doesn’t the tag become kind of redundant? Almost like writing “Code was created with the help of IntelliSense”. | | |
| ▲ | josephg 2 hours ago | parent [-] | | I don't think so. The tag doesn't just say "this was written by an LLM". It says which LLM - which model - authored it. As LLMs get more mature, I expect this information will have all sorts of uses. It'll also become more important to know what code was actually written by humans. |
|
|