AlexeyBrin, 14 hours ago:
I wonder how this will work in practice. Say I'm a senior engineer and I myself produce thousands of lines of code per day with the help of LLMs, as mandated by the company. I still presumably need to read and test the code that I push to production. When will I have time to read and evaluate similar amounts of code produced by a junior or mid-level engineer?
hrmtst93837, 6 hours ago:
Sign-off requirements like this quickly become performative when LLMs generate code faster than anyone can review it in detail. Relying on human oversight at scale is unrealistic unless the volume of changes drops or the review process itself becomes more automated.
quantified, 13 hours ago:
This is an important bottleneck. You can have LLM-based reviewers help you, but unless you yourself understood your thousands of lines, it's "somebody else's" code, and that somebody else cannot be fired or taken to court. The presumably human mid-level or junior engineer has their own issues with this, but the point of the LLM is that you don't need that engineer: for productivity purposes, the dev org only needs the seniors to wrangle all the LLMs they can. That isn't sustainable, though, so you'd still keep a couple of more-junior engineers doing similar work so they can mature.
MichaelRo, 7 hours ago:
> I wonder how this will work in practice. Say I'm a senior engineer and I produce thousands of lines of code per day with the help of LLMs as mandated by the company.

LOL, it's the age-old "responsibility without authority". The pressure to use AI will increase, and basically you'll be fired for not using it. Simultaneously, you'll be pressured to take the blame when the AI fucks up and you can't keep up with the bullshit, which also gets you fired. Either way, get some training on how to stack shelves at the supermarket, because that's how our future looks.