bigstrat2003 14 hours ago

Understanding (not necessarily reading) always was the real work. AI makes people less productive because it speeds up the thing that wasn't hard (generating code) while adding burden to the thing that was hard (understanding the code).

bwfan123 16 minutes ago | parent | next [-]

> Understanding (not necessarily reading) always was the real work.

Great comment. Understanding is mis-"understood" by almost everyone. :)

Understanding a thing equates to building a causal model of the thing. And I still do not see AI as having a causal model of my code, even though I use it every day. Seen differently, code is a proof of some statement, and a code review is verifying the correctness of that proof.

There is an analogue to Brandolini's bullshit asymmetry principle here. Understanding code is 10 times harder than reading code.

corndoge 13 hours ago | parent | prev | next [-]

There are many cases in which I already understand the code before it is written. In these cases AI writing the code is pure gain. I do not need to spend 30 minutes learning how to hold the Bazel rule. I do not need to spend 30 minutes writing client boilerplate. The list goes on. All broad claims about AI's effects on productivity have counterexamples. It is situational. I think most competent engineers quietly using AI understand this.

solid_fuel 9 hours ago | parent | next [-]

> There are many cases in which I already understand the code before it is written. In these cases AI writing the code is pure gain.

That's only true if the LLM understands the code in the same way you do - that is, it shares your expectations about architecture and structure. In my experience, once the architecture or design of an application diverges from the average path extracted from training data, performance seriously degrades.

You wind up with the LLM creating duplicate functions for things that are already handled in the code, or pulling in different libraries from the ones your code already uses.

em-bee 8 hours ago | parent | prev | next [-]

> In these cases AI writing the code is pure gain.

no, it isn't. unless the generated code is just a few lines long and all you are doing is effectively autocompletion, you have to go through the generated code with a fine-toothed comb to be sure it actually does what you think it should do and there are no typos. if you don't, you are fooling yourself.

corndoge an hour ago | parent [-]

Broadly I agree with you. I think of it in terms of responsibility. Ultimately the commit has my name on it, so I am the responsible party. From that perspective, I do need to "understand" what I am checking in to be reasonably sure it meets my professional standards of quality.

The reason I put scare quotes on "understand" is that we need to acknowledge that there are degrees of understanding, and that different degrees are required in different scenarios. For example, when you call syscall(), how well do you understand what is happening? You understand what's in the manpage; you know that it triggers a switch to kernel space, performs some task, returns some result. Most of us have not read the assembly code; we have a general concept of what is going on, but the real understanding pretty much ends at the function call. Yet we check that in, because that level of understanding corresponds to the general engineering standard.

In some cases, with AI, you can be reasonably sure the result is correct without deeply understanding it and still meet the bar. The Bazel rule example is a good one. I prompt, "take this OpenAPI spec and add build rules to generate bindings from it. Follow existing repo conventions." From my years of engineering experience, I already know roughly what the result should look like. I skim the generated diff to make sure it matches that expectation, and skim the model output to see what it referenced as examples. At that point, what the model produced is probably similar to what I would have produced by spending 30 minutes grepping around, reading build rules, et cetera. For this particular task, the model has saved me that time. I don't need to understand it perfectly. Either the code builds or it doesn't.
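To sketch the kind of diff I mean (a toy example; the spec name, generator target, and rules here are all invented, and a real repo would follow its own conventions, not this):

    # BUILD.bazel -- hypothetical sketch only; every name here is made up
    genrule(
        name = "api_client_src",
        srcs = ["api.yaml"],                # the OpenAPI spec (made-up name)
        outs = ["api_client.py"],           # generated bindings
        cmd = "$(location //tools:gen_client) $(location api.yaml) > $@",
        tools = ["//tools:gen_client"],     # assumed in-repo generator script
    )

    py_library(
        name = "api_client",
        srcs = [":api_client_src"],         # wire the generated file into a library
    )

I can skim something of that shape in seconds, because I already knew roughly what shape it should have.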

For other things, my standard is much higher. For example, models don't save me much time on concurrent code because, in order to meet the quality bar, the level of understanding required is much higher. I do need to sit there, read it, re-read it, chew on the concurrency model, et cetera. Like I said, it's situational.
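To make the concurrency point concrete, here's the flavor of thing that skims as fine but needs the slow read (a toy Python sketch, not code from any real project):

    import threading

    count = 0

    def worker():
        global count
        for _ in range(100_000):
            count += 1  # read-modify-write: not atomic, so increments can be lost

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(count)  # can come up short of 400000; the fix is a lock around the increment

A skim says "counter, loop, threads, fine"; the bug only shows up once you chew on the interleavings, which is exactly the slower kind of reading I mean.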

There are many, many other aspects to quantifying the effects of AI on productivity; code quality is just one of them. It's very holistic: it depends on you, how you work, what domain you work in, the technologies you work with, the team you work on, and many other factors.

anileated 8 hours ago | parent | prev | next [-]

> There are many cases in which I already understand the code before it is written.

Typing speed is your bottleneck?

Analemma_ 13 hours ago | parent | prev | next [-]

The problem is, even if all that is true, it says very little about the distribution of AI-generated pull requests to GitHub projects. So far, from what I’ve seen, those are overwhelmingly not done by competent engineers, but by randos who just submit a massive pile of crap and expect you to hurry up and merge it already. It might be rational to auto-close all PRs on GitHub even if tons of engineers are quietly using AI to deliver value.

csomar 13 hours ago | parent | prev [-]

I mean, we did copy/paste before this? Also, create-react-app is basically that, and probably better than a stochastic AI generating it.

agumonkey 6 hours ago | parent | prev | next [-]

Very much disagree. When I type code I don't just press keys; I read, think, organize... and the interplay between acting, seeing, watching, and reevaluating was the fun part. There's a part of you that disappears if you only review the result of a generator. That's why it's less interesting, imo.

freehorse 6 hours ago | parent | prev | next [-]

As not all codebases are well written, I have found it useful, at least once, to get an LLM to produce code that does X, essentially distilling it from a codebase that does XYZ. I found that reviewing the code the LLM produced, after feeding the original codebase into the context, was easier than going through the (not very well written) codebase myself. Of course this was just the starting point; there were a ton of things the LLM "misunderstood", and then there was a ton of manual work, but it is an (admittedly rarer) example for me where "AI-generated" code was easier to read than code written by (those) humans, and it was actually useful to have at that point.

Ntrails 2 hours ago | parent | prev | next [-]

Question:

Which is harder: writing 200 lines of code, or reading 200 lines of code someone else wrote?

I pretty firmly find the latter harder, which means for me AI is most useful for finessing a roughly correct PR rather than writing the actual logic from scratch.

jchanimal 14 hours ago | parent | prev | next [-]

It makes a great code-reading tool if you use it mindfully. For instance, you can check the integrity of your tests by having it fuzz the implementation, ensuring the tests fail, and then running git checkout to get clean again.
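A rough sketch of what I mean, with made-up names (the function, the mutation, and the test are all invented for illustration):

    # Implementation under test (original).
    def clamp(x, lo, hi):
        return max(lo, min(x, hi))

    # A mutation the model might introduce to probe the tests, e.g. swapping the bounds:
    #     return max(hi, min(x, lo))
    # If the suite still passes with the mutated body, the tests are too weak.

    def test_clamp():
        assert clamp(5, 0, 10) == 5
        assert clamp(-3, 0, 10) == 0
        assert clamp(42, 0, 10) == 10

Once you've confirmed the tests actually catch the mutation, git checkout -- . restores the original implementation.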

account42 4 hours ago | parent | prev | next [-]

Yeah, I have always seen PRs from new contributors as having (on average) negative value, but as an investment in a hopefully positive future contributor. I don't have that optimism for contributors who start out with AI slop.

mannanj 13 hours ago | parent | prev [-]

AI makes people less productive because it’s speeding up the thing that was hard: training AI for better future AI.

The productivity gets siphoned to the AI companies owning the AI.

ironbound 9 hours ago | parent | next [-]

You'd be unsurprised by how many AI poison-pill projects are on GitHub.

trhway 10 hours ago | parent | prev [-]

That is how the main point of Das Kapital looks in the modern AI world.