| ▲ | alphazard 8 months ago |
Different people clearly mean different things when they talk about software quality. There is quality as perceived by the user: few bugs, accurately models the problem they have, no more complicated than necessary, etc. Then there is another notion of quality that has to do with how the software is built: how neat and clear it is, how easy it is to extend or change.

The first kind of quality is the only kind that matters in the end. The second kind has mattered a lot up until now because of how involved humans are in typing up and editing software. It doesn't need to matter going forward. To a machine, the entire application can be rewritten just as easily as making a small change.

I would gladly give up all semblance of the second kind of quality in exchange for formal specifications and testing methods that an AI goes to the trouble of satisfying for me. Concepts and models matter in the problem domain (assuming humans are the ones using the software), but they will increasingly have no place in the solution domain.
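As one concrete sketch of what "formal specifications and testing methods" could mean here, a property-based test can act as an executable spec that any generated implementation has to satisfy, whatever its internal structure looks like. This is a hypothetical Python example using the hypothesis library; apply_discount is a made-up stand-in, not anything from the comment:

    # Hypothetical sketch: a property-based test as an executable specification.
    # Only the stated properties matter; how apply_discount is structured does not.
    from decimal import Decimal
    from hypothesis import given, strategies as st

    def apply_discount(price: Decimal, percent: int) -> Decimal:
        # Stand-in for whatever an AI (or a human) happens to generate.
        return price * (Decimal(100 - percent) / Decimal(100))

    @given(
        price=st.decimals(min_value=Decimal("0.00"), max_value=Decimal("10000.00"), places=2),
        percent=st.integers(min_value=0, max_value=100),
    )
    def test_discount_spec(price: Decimal, percent: int) -> None:
        discounted = apply_discount(price, percent)
        # The "spec": a discount never raises the price and never makes it negative.
        assert Decimal("0") <= discounted <= price

Run under pytest, hypothesis searches for counterexamples automatically; the implementation can be rewritten wholesale as long as the properties keep passing.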
| ▲ | spamizbad 8 months ago |
The second type of quality is necessary to achieve the first type for systems with nontrivial levels of complexity. It doesn't need to be perfect, or even close to perfect, but it does need to be "good enough": your end users will eventually notice how long bugs take to get fixed, how long and how often outages occur, and how long it takes to get new functionality into your software.

Beyond your end users, you likely have competitors, and if they start moving faster and build a reputation for dependability and responsiveness, your business WILL suffer. You will see attrition, your CAC (customer acquisition cost) will go up, and those costs get absorbed somewhere: less runway, cuts to capex/opex (layoffs), higher prices, or all of the above. And that's an entire domain AI isn't (yet) suited to assist with.

There's no free lunch.
| ▲ | bluefirebrand 8 months ago |
> The first kind of quality is the only kind that matters in the end.

How easy it is to maintain and extend does absolutely matter, in a world where software is constantly growing and evolving and never "finished".
| ▲ | SCdF 8 months ago |
> The first kind of quality is the only kind that matters in the end.

Yes. But the first kind of quality is enabled by the second kind. Until we live in a faultless closed loop[1], where with AI "the entire application can be rewritten just as easily as making a small change", you still need the second kind.

[1] and it's debatable if we ever will
| ▲ | armchairhacker 8 months ago |
The problem domain is part of the solution domain: writing a good specification and good tests is a skill.

Moreover, I suspect the second kind of quality won't completely go away: a smart machine will develop new techniques to organize its code (making it "neat and clear" to the machine), and those techniques may resemble human ones. I wouldn't bet much on it, but maybe, buried within the cryptic code output by a machine, there will even be patterns resembling popular design patterns. Brute force can get results faster than careful planning, but brute force plus planning gets results faster than either alone. AI will keep being optimized (even if one day it starts optimizing itself), and organization is presumably a good optimization.

Furthermore: LLMs think differently than humans do; e.g. they seem to have a much larger "context" (analogous to short-term memory), but their training (analogous to long-term memory) is immutable. Yet there are similarities, as demonstrated in LLM responses: they reason in English, and they reach conclusions without understanding the steps they took. Assuming this holds for later AIs, the structures those AIs organize their code into probably won't be the structures humans would create, but they'll be similar.

Although auto-encoders are a different and much smaller type of model, there's evidence of this in them: they work via compression, which is a form of organization, and their weights roughly correspond to human concepts like specific digits (MNIST) or facial characteristics (https://www.youtube.com/watch?v=4VAkrUNLKSo&t=352).
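For anyone unfamiliar with the auto-encoder point: the compression comes from forcing the input through a narrow bottleneck and training only on reconstruction error. A minimal sketch in PyTorch, with random tensors standing in for MNIST images (illustrative only, not the model from the linked video):

    # Minimal autoencoder sketch: compression through a bottleneck, trained on
    # reconstruction error alone. Random data stands in for flattened MNIST digits.
    import torch
    from torch import nn

    autoencoder = nn.Sequential(
        nn.Linear(784, 64), nn.ReLU(),     # encoder: 28x28 pixels squeezed to 64 values
        nn.Linear(64, 16), nn.ReLU(),      # bottleneck: the compressed "organized" code
        nn.Linear(16, 64), nn.ReLU(),      # decoder: expand back out...
        nn.Linear(64, 784), nn.Sigmoid(),  # ...to a reconstruction of the input
    )
    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(100):
        batch = torch.rand(32, 784)             # stand-in for a batch of MNIST images
        loss = loss_fn(autoencoder(batch), batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()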
| ▲ | allenu 8 months ago |
> The first kind of quality is the only kind that matters in the end.

From a business perspective, this is what's exciting to a lot of people. I think we have to recognize that a lot of products fail not because the software was written poorly, but because the business idea wasn't very good. If a business is able to spin up its product with some amount of vibe coding to test its merits, and can explore product-market fit more quickly, does it really matter if the code quality is bad? Likewise, a well-crafted product can still fail, either because the market shifted (maybe it took too long to produce) or because there wasn't a market for it to begin with.

Obviously there's a middle ground here: if you lean on vibe coding so heavily that you produce something that constantly fails or is hard to maintain, you've gone too far. But it's a balance that needs to be weighed against business risk.
| ▲ | pyfon 8 months ago |
You are talking about an imagined future, not current reality. An AI will be as flustered by spaghetti as a human. Or, not so much flustered: it will just make willy-nilly changes and end up in an expensive infinite loop of test failures and drunken changes to try to fix them.
| ▲ | userbinator 8 months ago |
The problem is that the first kind of quality is something that's hard even for human programmers to do well, while AI, like the rest of the tools that came before it, is much better at the second.
| ▲ | echelon 8 months ago |
> The second kind has mattered a lot up until now because of how involved humans are in typing up and editing software. It doesn't need to matter going forward.

Tell that to the vagus nerve in the giraffe.