simonw 4 days ago

100%. There's no difference at all in my mind between an AI-assisted PR and a regular PR: in both cases they should include proof that the change works and that the author has put the work in to test it.

oceanplexian 4 days ago | parent | next [-]

At the last company I worked at (a large, popular tech company), it took an act of the CTO to get engineers to simply attach a JIRA Ticket to the PR they were working on so we could track it for tax purposes.

The devs went in kicking and screaming. As an SRE, it seemed like for SDEs, writing a description of the change, explaining the problem the code solves, describing the testing methodology, etc., is harder than actually writing the code. Ironically, AI is proving that this theory was right all along.

sodapopcan 4 days ago | parent | next [-]

Complaining about including a ticket number in the commit is a new one for me. Good grief.

rootusrootus 4 days ago | parent [-]

It could be a death-by-a-thousand-cuts situation and we don't have enough context. My company has spent the last few years going all in on the capitalization of software expenses, and now we have to include a whole slew of unrelated attributes in every last Jira ticket. Then the "engineering team" (there is only one of these, somehow, in a 5K-employee company) decrees all sorts of requirements about how we test our software and document it, again using custom Jira attributes to enforce them. Developers get a little pissy about being messed with by MBAs and non-engineer "engineers" trying to tell them how to do their job.

(As an aside, for anybody on the giving end of such requirements: the people working the tickets will happily lie on all of that stuff just to get past it as quickly as possible, so I hope you're not relying on it for accuracy.)

But putting the ticket number in the commit ... that's basically automatic, I don't know why it should be that big a concern. The branch itself gets created with the ticket number and everything follows from that, there's no extra effort.

comfydragon 4 days ago | parent | next [-]

> The branch itself gets created with the ticket number and everything follows from that, there's no extra effort.

The only problem there is that it breeds a deeply ingrained assumption that having the Jira key in the branch name is enough for traceability between the Jira issue and the commits to always exist. I've had to remind many people I work with that branch names are not forever, but commit messages are.

I haven't quite succeeded in getting everyone to put a Jira ID somewhere in the changeset, but I try...

necovek 3 days ago | parent [-]

It sounds pretty simple to automate that away too: make it part of the merge hook to include the source branch name in the commit message.

We are engineers, everything is a problem waiting to be automated :)
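
Git doesn't ship a hook literally called a "merge hook", but a prepare-commit-msg hook gets you most of the way there. A minimal sketch, assuming branch names start with the ticket key (the TCKT-style pattern, and Python for the hook, are just my assumptions, not anyone's actual setup):

    #!/usr/bin/env python3
    # .git/hooks/prepare-commit-msg -- illustrative sketch, not a drop-in standard.
    # Prepends the Jira key parsed from the current branch name
    # (e.g. TCKT-123-fix-login) to the commit message if it isn't already there.
    import re
    import subprocess
    import sys

    msg_file = sys.argv[1]  # git passes the path to the commit message file

    branch = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True,
    ).stdout.strip()

    match = re.match(r"[A-Z]+-\d+", branch)  # assumes branches start with the ticket key
    if match:
        key = match.group(0)
        with open(msg_file, "r+") as f:
            msg = f.read()
            if key not in msg:
                f.seek(0)
                f.write(f"{key}: {msg}")

Drop it in .git/hooks/prepare-commit-msg, make it executable, and every commit on the branch picks up the key without anyone having to think about it.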

cesarb 4 days ago | parent | prev | next [-]

> But putting the ticket number in the commit ... that's basically automatic, I don't know why it should be that big a concern. The branch itself gets created with the ticket number and everything follows from that, there's no extra effort.

That poster said "attach a JIRA Ticket to the PR", so in their case, it's not that automatic.

rootusrootus 4 days ago | parent | next [-]

A lot of Jira shops use the rest of the Atlassian stack, so it becomes automatic. The branch is named automatically when created from a link on the Jira task. Every time you push, it gives you a URL for opening the PR if you want, and everything ends up pre-filled. All of the Atlassian tools recognize the format of a task ID and hyperlink it automatically.

I haven't dealt with non-Atlassian tools in a while but I assume this is pretty much bog standard for any enterprise setup.

alexpotato 4 days ago | parent | prev [-]

If you are using the Atlassian Git clone, then just putting the JIRA ticket in the title automagically links the PR to the ticket.

sodapopcan 4 days ago | parent | prev [-]

Ah ya, death-by-a-thousand-cuts is certainly a charitable take!

necovek 3 days ago | parent | prev | next [-]

Invite engineers to solve it in a way that makes it cheap for them.

Most shops I've been at prefix their branch names with ticket numbers ("bug-X-" or "TCKT-Y-"), and then it's trivial to reference the ticket back from the commit or PR. Some will write scripts on top, which gets them even more invested in solving your problem (the scripts might add links into the tracking tools too, move the ticket to "In Review" when the PR is up, close it after it's merged...).
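
For the "move it to In Review" part, a minimal sketch against Jira's REST transitions endpoint (the base URL, credentials, and the transition id are placeholders; the id for "In Review" differs per project workflow, so treat this as an assumption-laden example):

    #!/usr/bin/env python3
    # Illustrative sketch only: transition a Jira issue to "In Review" when a PR opens.
    # Base URL, credentials, and the transition id are placeholders -- look up the
    # real id for your workflow via GET .../issue/{key}/transitions.
    import os
    import re
    import sys

    import requests

    base_url = os.environ["JIRA_BASE_URL"]          # e.g. https://yourcompany.atlassian.net
    auth = (os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"])
    transition_id = os.environ.get("JIRA_IN_REVIEW_TRANSITION_ID", "31")

    branch = sys.argv[1]                            # e.g. "TCKT-42-add-retries"
    match = re.match(r"[A-Z]+-\d+", branch)
    if match:
        resp = requests.post(
            f"{base_url}/rest/api/2/issue/{match.group(0)}/transitions",
            json={"transition": {"id": transition_id}},
            auth=auth,
        )
        resp.raise_for_status()

Wire it into whatever fires when the PR is opened (a CI job, a webhook handler) and pass it the branch name.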

p2detar 4 days ago | parent | prev | next [-]

Strange, I thought this was actually the norm. Our PRs are almost always tagged with a corresponding Jira ticket. I think this is more helpful to developers than to other roles, because it gives them a history of what has been fixed.

One can also point QA or consultants to a ticket for documentation purposes or timeline details.

babarock 4 days ago | parent | prev [-]

You're not wrong; however, the issue is that it's not always easy to tell whether a PR includes proof that the change works. It requires that the reviewer interrupt what they're doing, switch context completely, and look at the PR.

If you consider that reviewer bandwidth is very limited in most projects AND that the volume of low-effort, AI-assisted PRs has grown enormously over the past year, we now have a spam problem.

Some of my engineers refuse to review a patch if they detect that it's AI-assisted. They're wrong, but I understand their pain.

wiml 4 days ago | parent [-]

I don't think we're talking about merely "AI-assisted" PRs here. We're talking about PRs where the submitter has not read the code, doesn't understand it, and can't be bothered to describe what they did and why.

As a reviewer with limited bandwidth, I really don't see why I should spend any effort on those.

atomicnumber3 4 days ago | parent [-]

"We're talking about PRs where the submitter has not read the code, doesn't understand it, and can't be bothered to describe what they did and why."

IME, "AI" PRs are categorically that kind of PR. I find, and others around me in my org have agreed, that if you actually do everything you describe, the actual time savings from AI are often (for a mid-level dev or above) either zero or negative.

I personally have used the phrase "baptized the AI out of it" to describe my own PRs... where I may have initially used AI to generate a bunch of the code, looked at it, and went "huh, neat, that actually looks pretty right, this is almost done." Then I generate unit tests. Then I fix the unit tests to not be shit. Then I find bugs in the AI-generated code. Then, upon pondering the code a bit, or maybe while fixing the bugs, I find the abstractions it created are clunky, so I refactor it a bit... and by the time I'm done there's not a lot of AI left in the PR; it's all me.