| ▲ | Animats 6 hours ago |
| > They can’t tell you what the AI got wrong. AI code generators are trolls. They confidently produce plausible content which is partly wrong. Then humans try to find their errors. This is not fun. It has no flow. |
|
| ▲ | simondotau 5 hours ago | parent | next [-] |
| I beg to differ, insofar as my own experience has been the exact opposite. I enjoy fixing other people's mistakes. And I especially enjoy outsmarting the LLMs. I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state. |
| |
| ▲ | Terr_ 5 hours ago | parent | next [-] | | I think I might enjoy it for a little bit and then become very depressed at the idea that it will never end, a future of fixing things that should never have been broken in the first place and which won't stay fixed. | |
| ▲ | lelanthran 4 hours ago | parent | prev | next [-] | | > I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state. I can do that too. Most programmers can. That's because it requires less skill! Critiquing something is always easier than doing it. I can literally keep an LLM fixing things forever just by saying things like "This is not scalable", or "this is not maintainable", or "this is not flexible", or "this is not robust", etc. ad nauseam. That doesn't take skill at the level needed to actually write the software. For the market that is hoping to switch to mostly LLM coding, the prize they are eyeing is skill devaluation, not just, as many think, productivity gains. They have no reason to double output, but they'd sure love to first halve the number of people employed, then halve the salaries of those people (supply/demand plus a glut of programmers on the market), and then halve salaries again because almost no skill is necessary... | | |
| ▲ | bradleyjg 4 hours ago | parent [-] | | > That's because it requires less skill! Critiquing something is always easier than doing it. No, it was always the other way around. Mediocre programmers always wanted to rewrite everything because reading and understanding an existing codebase was always harder than writing some greenfield thing with a “modern language” or “modern libraries” or “modern idioms.” So they’d go and do that and end up with 100x the bugs. | | |
| ▲ | layer8 an hour ago | parent | next [-] | | How is that “no” and “the other way around”? The desire to rewrite comes from the ease with which one can critique existing code for being “too hard” to understand. | |
| ▲ | lelanthran 3 hours ago | parent | prev | next [-] | | > Mediocre programmers always wanted to rewrite everything You are comparing writing something with rewriting something. Do you not know the difference? | |
| ▲ | ffsm8 3 hours ago | parent | prev [-] | | You can't generalize that statement. There is a very good reason why the creator of Erlang once said something along the lines of "you need to iteratively remake your software, improving it each time". As your knowledge about a topic grows, the mistakes in your initial implementation may become more and more obvious, and fixing them may even mean a full rewrite. But yes, a person who instantly says "rewrite" before they have understood the software is likely very inexperienced and has only worked on greenfield projects with few contributors (likely only themselves). |
|
| |
| ▲ | neonstatic 5 hours ago | parent | prev [-] | | Perhaps you have the psychological makeup to thrive in this new environment. Glad it is working for you. |
|
|
| ▲ | cbg0 5 hours ago | parent | prev | next [-] |
| It should have the same flow as reviewing PRs from humans. |
| |
| ▲ | t43562 4 hours ago | parent | next [-] | | Who really truly enjoys that and doesn't see it as a chore? I find the real way to review other people's code is to program with it; then I start seeing where the problems are much more clearly. I would do a review and spot nothing important, then start working on my own follow-on change and immediately run into issues. | | |
| ▲ | sampullman 4 hours ago | parent | next [-] | | I usually don't mind, but I tend to split reviews into two types. Either I understand the context and can quickly do an in-depth review, or I have to take some time to actually learn about the code by reviewing the surrounding systems, experimenting with it, etc. But in both cases I would at least run the code and verify correctness. I think it becomes a chore when there are too many trivial mistakes, and you feel like your time would have been better spent writing it yourself. As models and agent frameworks improve I see this happening less and less. | |
| ▲ | cbg0 4 hours ago | parent | prev [-] | | > Who really truly enjoys that and doesn't see it as a chore? This is a whole different discussion, but I just see it as part of the job that I'm getting paid for; I don't need to enjoy it to do it. Functional testing is a must now that writing tests is also automated away by LLMs, as it gives you a better understanding of whether the code does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code. Plenty of LLM-written code runs fine until it doesn't, though we see this with human-written code too, so it's about investing more time in the hopes of spotting problems before they become problems. | | |
| ▲ | t43562 3 hours ago | parent [-] | | > Functional testing is a must now that writing tests is also automated away by LLMs as you can get a better understanding if it does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code. Well, there you go. Letting AI write the tests is a mistake, IMO. When I'm working with other people I write tests too, and when I see their tests I know what they're missing because I know the system and the existing tests. Sometimes I see the problem in their tests while I'm working on some of my own. If you absent yourself from that process then .... |
|
| |
| ▲ | fg137 3 hours ago | parent | prev | next [-] | | Which is a really, really bad idea. Most people don't spend nearly enough time on a code review. They certainly don't think as hard as needed to question the implementation or come up with all the edge cases. It's active vs. passive thinking. I, for one, have found numerous issues in other people's code that make me wonder, "would they have ever made such a mistake if they had hand-coded this?" By the way, a side effect is that nobody really understands the codebase. People just leave it to AI to explain what the code does. That is of course helpful for onboarding, but concerning for complex issues or long-term maintenance. | |
| ▲ | microtonal 4 hours ago | parent | prev [-] | | The problem is that LLMs completely change the equation. Before LLMs, beyond very junior (needs serious coaching) levels, reviewing was typically faster than writing the code being reviewed. With LLMs, writing code is orders of magnitude faster than reviewing it. We already see open source projects getting buried in LLM slop, where you have to find the real human, or at least carefully curated, contributions among the slop. I would not be surprised if many open source projects outright stop taking PRs. I have had the same feeling several times: if I'm communicating with an LLM through the GitHub PR interface, I'd rather just talk to an LLM directly myself. But ending PRs is going to be painful for acquiring new contributors and training more junior people. Hopefully the tooling will evolve. E.g. I'd love to have a system where someone has to open an issue with a plan first, and by approving it you could give them a 'ticket' to open a single PR for that issue. Though I would be surprised if GitHub and others created features that are essentially there to rein in Copilot etc. |
|
|
| ▲ | catcowcostume 3 hours ago | parent | prev | next [-] |
| Anything AI-generated is a troll. There's no logic; it's just pattern repetition. I don't get how supposedly smart engineers fall for it. |
| |
| ▲ | barnabee 2 hours ago | parent [-] | | Because a lot of engineering is pattern repetition, which is not very fun for engineers either, and LLMs can do it much faster? | | |
| ▲ | skydhash 2 hours ago | parent [-] | | Not really. Any real patterns already got optimized and automated away. If you’re still seeing patterns, you need to look harder, because they will be similar only superficially. |
|
|
|
| ▲ | solumunus 6 hours ago | parent | prev [-] |
| [flagged] |