Negitivefrags 6 hours ago

At my company I just tell people “You have to stand behind your work”

And in practice that means that I won’t take “The AI did it” as an excuse. You have to stand behind the work you did even if you used AI to help.

I neither tell people to use AI, nor tell them not to use it, and in practice people have not been using AI much for whatever that is worth.

themacguffinman 6 hours ago | parent | next [-]

The difference I see between a company dealing with this as opposed to an open source community dealing with this is that the company can fire employees as a reactive punishment. Drive-by open source contributions cost very little to lob over and can come from a wide variety of people you don't have much leverage over, so maintainers end up making these specific policies to prevent them from having to react to the thousandth person who used "The AI did it" as an excuse.

osigurdson 4 hours ago | parent [-]

When you shout "use AI or else!" from a megaphone, don't expect everyone to interpret it perfectly. Especially when you didn't actually understand what you were saying in the first place.

bjackman an hour ago | parent | prev | next [-]

Shouldn't this go without saying though? At some point someone has to review the code and they see a human name as the sender of the PR. If that person sees the work is bad, isn't it just completely unambiguous that the person whose name is on the PR is responsible for that? If someone responded "but this is AI generated" I would feel justified just responding "it doesn't matter" and passing the review back again.

And the rest (what's in the LLVM policy) should also fall out pretty naturally from this? If someone sends me code for review and I get the feeling they haven't read it themselves, I'll say "I'm not reviewing this, and I won't review any more of your PRs unless you promise you've reviewed them yourself first."

The fact that people seem to need to establish these things as an explicit policy is a little concerning to me. (Not that it's a bad idea at all. I'm just worried that there was a need.)

lexicality 28 minutes ago | parent [-]

You would think it's common sense, but I've received PRs that the author didn't understand, and when questioned they told me that the AI knows more about X than they do, so they trust its judgement.

A terrifying number of people seem to think that the damn thing is magic and infallible.

EE84M3i 5 hours ago | parent | prev | next [-]

>I neither tell people to use AI, nor tell them not to use it, and in practice people have not been using AI much for whatever that is worth.

I find this bit confusing. Do you provide enterprise contracts for AI tools? Or do you let employees use their personal accounts with company data? It seems all companies have to be managing this somehow at this point.

jeroenhd 5 hours ago | parent | prev | next [-]

Some people who just want to polish their resume will feed any questions/feedback back into the AI that generated their slop. That goes back and forth a few times until the reviewing side learns that the code authors have no idea what they're doing. An LLM can easily pretend to "stand behind its work" if you tell it to.

A company can just fire someone who doesn't know what they're doing, or at least take some kind of measure to contain their efforts. On a public project, these people can be a death by a thousand cuts.

The best example of this is the automated "CVE" reports you find on bug bounty websites these days.

i2talics 6 hours ago | parent | prev | next [-]

What good does it really do me if they "stand behind their work"? Does that save me any time drudging through the code? No, it just gives me a script for reprimanding. I don't want to reprimand. I want to review code that was given to me in good faith.

At work once I had to review some code that, in the same file, declared a "FooBar" struct and a "BarFoo" struct, both with identical field names/types, complete with boilerplate to convert between them. This split served no purpose whatsoever; it was probably just the result of telling an agent to iterate until the code compiled and then shipping it off without actually reading what it had done. Yelling at them that they should "stand behind their work" doesn't give me back the time I lost trying to figure out why on earth the code was written this way. It just makes me into an asshole.
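Roughly this shape, for illustration (a hypothetical sketch in Go; only the FooBar/BarFoo names come from the actual file, the fields and language are made up):

    package main

    // Two structs with identical fields, plus hand-rolled boilerplate to
    // convert between them. (Hypothetical reconstruction, not the real code.)
    type FooBar struct {
        ID   int
        Name string
    }

    type BarFoo struct {
        ID   int
        Name string
    }

    // Converting back and forth adds nothing; a single struct would do.
    func toBarFoo(f FooBar) BarFoo {
        return BarFoo{ID: f.ID, Name: f.Name}
    }

    func toFooBar(b BarFoo) FooBar {
        return FooBar{ID: b.ID, Name: b.Name}
    }

    func main() {
        // Round-tripping a value through both types is a no-op.
        _ = toFooBar(toBarFoo(FooBar{ID: 1, Name: "example"}))
    }

One struct would have carried exactly the same data with none of the conversion code.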

sb8244 5 hours ago | parent | next [-]

It adds accountability, which is unfortunately something that ends up lacking in practice.

If you write bad code that creates a bug, I expect you to own it when possible. If you can't and the root cause is bad code, then we probably need to have a chat about that.

Of course the goal isn't to be a jerk. Lots of normal bugs make it through in reality. But if the root cause is true negligence, then there's a problem there.

AI makes negligence much easier to achieve.

nineteen999 5 hours ago | parent | prev | next [-]

If you had asked Claude to review the code, it would probably have pointed out the duplication pretty quickly. And I think this is the thing: if we are going to manage programmers who are using LLMs to write code, and have to review their code, reviewers aren't going to be able to do it for much longer without resorting to LLM assistance themselves to get the job done.

It's not going to be enough to say "I don't use LLMs."

nradov 5 hours ago | parent | prev [-]

Yelling at incompetent or lazy co-workers isn't your responsibility, it's your manager's. Escalate the issue and let them be the asshole. And if they don't handle it, well it's time to look for a new job.

skeeter2020 5 hours ago | parent [-]

>> Yelling at incompetent or lazy co-workers isn't your responsibility, it's your manager's

First: Somebody hired these people, so are they really "lazy and incompetent"?

Second: There is no one whose "job" is to yell at incompetent or lazy workers.

darth_avocado 6 hours ago | parent | prev | next [-]

> At my company I just tell people “You have to stand behind your work”

Since when has that not been the bare minimum? Even before AI existed, and even if you didn't work in programming at all, you had to do at least that. Even if you use a toaster and your company guidelines say to toast every sandwich for 20 seconds, if following every step as trained results in a lump of charcoal for bread, you can't serve it up to the customer. At the end of the day, you made the sandwich, and you're responsible for making it correctly.

Using AI as a scapegoat for sloppy and lazy work needs to be unacceptable.

Negitivefrags 6 hours ago | parent | next [-]

Of course it’s the minimum standard, and it’s obvious if you view AI as a tool that a human uses.

But some people view it as a separate entity that writes code for you. And if you view AI like that, then “The AI did it” becomes an excuse that they use.

atoav 6 hours ago | parent [-]

"Yes, but you submitted it to us."

If you're illiterate and can't read, maybe don't submit text someone has written for you when you can't even parse the letters.

fourthark 5 hours ago | parent [-]

The policy in TFA is a nicer way of saying that.

fwipsy 6 hours ago | parent | prev [-]

Bad example. If the toaster carbonized bread in 20 seconds it's defective, likely unsafe, possibly violates physics, certainly above the pay grade of a sandwich-pusher.

Taking responsibility for outcomes is a powerful paradigm but I refuse to be held responsible for things that are genuinely beyond my power to change.

This is tangential to the AI discussion though.

darth_avocado 6 hours ago | parent | next [-]

> If the toaster carbonized bread in 20 seconds it's defective, likely unsafe, possibly violates physics, certainly above the pay grade of a sandwich-pusher.

If the toaster is defective, then not using it, working out how to use it if it's still usable, or reporting it as defective and getting it replaced are all well within the pay grade of a sandwich pusher, and part of their responsibilities.

And you’re still responsible for the sandwich. You can’t throw up your arms and say “the toaster did it”. And that’s where it’s not tangential to the AI discussion.

Toaster malfunctioning is beyond your control, but whether you serve up the burnt sandwich is absolutely within your control, which you will be and should be held responsible for.

dullcrisp 6 hours ago | parent | prev | next [-]

No it's not. If you burn a sandwich, you make a new sandwich. Sandwiches don't abide by the laws of physics. If you call a physicist and tell them you burnt your sandwich, they won't care.

atoav 6 hours ago | parent | prev [-]

I think it depends on the pay. You pay below the living wage? Better live with your sla... ah, employees... serving charcoal. You pay them well above the living wage? Now we start to get into "they should care" territory.

anonzzzies 6 hours ago | parent | prev | next [-]

But isn't "the AI did it" an immediate you're-out thing? If you cannot explain why the thing you committed to git was made the way it is, we can just replace you with AI, right?

EagnaIonat 5 hours ago | parent | next [-]

> we can just replace you with AI right?

Accountability and IP protection is probably the only thing saving someone in that situation.

tjr 6 hours ago | parent | prev [-]

Why stop there? We can replace git with AI too!

ronsor 6 hours ago | parent [-]

If you generate the code each time you need it, all version control becomes obsolete.

verbify 5 hours ago | parent [-]

They'll version control the prompts because the requirements change.

ronsor 5 hours ago | parent [-]

Not if we AI-generate the requirements!

bitwize 6 hours ago | parent | prev [-]

The smartest and most sensible response.

I'm dreading the day the hammer falls and there will be AI-use metrics implemented for all developers at my job.

locusofself 6 hours ago | parent [-]

It's already happened at some very big tech companies

skeeter2020 5 hours ago | parent [-]

One of the reasons I left a senior management position at my previous 500-person shop was that this was being done, but not even accurately. Copilot usage via the IDE wasn't being tracked; just the various other usage paths.

It doesn't take long for shitty small companies to copy the shitty policies and procedures of successful big companies. It seems even intelligent executives can't get correlation and causation right.