pessimizer, 15 hours ago:
I don't think this makes any sense. I can see that long delays in public reporting might not be good for the near future, but a year from now all of the easily found stuff will have been found. At some point, everything will have hardened to a certain extent, new things will get scanned before they hit the streets, and the only bugs being found will rely a lot more on somebody's insight than on the LLM used to test that insight. I think people are getting overly impressed/intimidated by tons of bugs being found by LLMs in a bunch of code that hasn't been looked at by more than a couple of people in years, or even at all since it was written. Those are going to run out. There won't be any code left that hasn't recently been looked over by an LLM.
unknownhad, 4 hours ago:
I think this assumes software is a static target (which it is not). We are not just using LLMs to scan old code; developers are using LLMs (like Copilot and others) to write new code, and they are doing it by the shovel-load. The pace of shipping has gone up, which means the pace of introducing new bugs has gone up right alongside it. The bug pool does not empty out, because we keep refilling it every sprint. Plus, the definition of the "easily found stuff" is a moving target. The AI models aren't static either: what takes a human reverse-engineer a week of deep insight today might be a standard automated API call by 2027. So while I would love for the dust to settle in a year, I think we are just looking at the new normal. Thanks for reading the post and for the great counterpoint!
kennywinker, 14 hours ago:
That makes sense to me, but in a world where code is generated by the shovel-load (see https://news.ycombinator.com/item?id=48073680), could the pace of introducing bugs not match or exceed the rate of finding them indefinitely?