▲ | thunky 5 days ago
> Choosing to use a process known to be flawed, then hoping that people will catch the mistakes, doesn't seem like a great idea if the goal is quality. You're also describing the software development process prior to LLMs. Otherwise code reviews wouldn't exist.
▲ | Jensson 4 days ago | parent | next [-]
People have built complex, working, mostly bug-free products without code reviews, so humans are not that flawed. With humans plus code reviews, two humans have looked at the code. With an LLM plus a human review of the LLM's output, only one human has looked at it, so it's not the same. LLMs are still far less reliable than humans; otherwise you could just tell the LLM to do the code reviews and it would build the entire complex product itself.
▲ | HarHarVeryFunny 5 days ago | parent | prev | next [-]
Sure - software development is complex, but there has been a general effort over time to improve the process and to develop languages, frameworks and practices that remove sources of human error. Use of AI seems to be a regression in this regard, at least as currently practiced - "look ma, no hands! I've just vibe coded an autopilot". The current focus seems to be on productivity - how many more lines of code or vibe-coded projects you can churn out - maybe because AI is still basically a novelty that people are learning how to use. If AI is to be used productively toward business goals, then the focus will need to mature and shift to things like quality and safety.
▲ | rsynnott 4 days ago | parent | prev | next [-]
Code reviews are useful, but I think everyone would admit that they are not _perfect_.
▲ | 5 days ago | parent | prev [-]
[deleted]