| |
| ▲ | thyristan 7 days ago | parent | next [-] | | > [create prototype], then throw out the code and write a proper solution. Problem is that, in everyone's experience, this almost never happens. The prototype is declared "good enough, just needs a few small adjustments"; a rewrite is declared too expensive and too time-consuming. And crap goes to production. | | |
| ▲ | ceuk 7 days ago | parent | next [-] | | Watching what was supposed to be a prototype become the production code is one of the most constant themes of my 20-year career | | |
| ▲ | jmathai 7 days ago | parent [-] | | Software takes longer to develop than other parts of the org want to wait. AI is emerging as a possible solution to this decades-old problem. | | |
| ▲ | thyristan 7 days ago | parent | next [-] | | Everything takes longer than people want to wait. But when building a house, people are more patient and tolerant about the time taken, because they can physically see the progress, the effort, the sweat. Software is intangible and invisible except maybe for beta-testers and developer liaisons. And the visual parts, like the nonfunctional GUI or web UI, are often taken as "most of the work is done", because that is what people see and interact with. | | |
| ▲ | jmathai 7 days ago | parent [-] | | It's product management's job to bridge that gap. Break down and prioritize complex projects into smaller deliverables that keep the business folks happy. It's better than houses, IMO - no one moves into the bedroom once it's finished while waiting for the kitchen. |
| |
| ▲ | zppln 7 days ago | parent | prev | next [-] | | No, the org will still have to wait for the requirements, which is what they were waiting for all along. | |
| ▲ | dudefeliciano 7 days ago | parent | prev | next [-] | | until the whole company fails because of a lack of polish and security in the software. Think the Tea app's openly accessible databases... | | | |
| ▲ | YeGoblynQueenne 7 days ago | parent | prev | next [-] | | Or as a new problem that will persist for decades to come. | |
| ▲ | ozim 7 days ago | parent | prev [-] | | I don't really see this as a universal truth, with corporate customers stalling the process for up to 2 years or end users being reluctant to change. We were deploying new changes every 2 weeks and it was too fast. End users need training and communication, and pushback was quite a thing. We also just pushed back the aggressive timeline we had for migration to new tech. Much faster interface with shorter paths - but users went all pitchforks and torches just because it was new. But with AI fortunately we will get rid of those pesky users, right? | | |
| ▲ | thyristan 7 days ago | parent [-] | | Different situation. You already had a product that they were quite happy with, and that worked well for them. So they saw change as a problem, not a good thing. They weren't waiting for anything new, or anything to improve, they were happy on their couch and you made them move to redo the upholstery. | | |
| ▲ | ozim 7 days ago | parent [-] | | They were not happy, otherwise we would not have had new requirements. Well, maybe they were happy, but the software needed to be updated for the new business processes their company was rolling out. Managers wanted the changes ASAP - their employees not so much, but they had to learn that the hard way. The not-so-fun part was that we got the blame. Just like I got a downvote :), not my first rodeo. |
|
|
|
| |
| ▲ | worldsayshi 7 days ago | parent | prev | next [-] | | Yes, that's how it is. And that is a separate problem. And it also shifts the narrative a bit more towards 'the bottleneck is writing good code'. | |
| ▲ | lwhi 7 days ago | parent | prev | next [-] | | This is the absolute reality. I think we'll need to see some major f-ups before this current wave matures. | |
| ▲ | sdeframond 7 days ago | parent | prev [-] | | > Problem is How much of a problem is it, really? I mean, what are the alternatives? | |
| ▲ | thyristan 7 days ago | parent [-] | | The alternative is obviously: Do it right on the first try. How much of a problem it is can be seen with the tons of products that are crap on release and only slowly get patched to a half-working state when the complaints start pouring in. But of course, this is the status quo in software, so the perception of this as a problem among software people isn't universal, I guess. | | |
| ▲ | sdeframond 7 days ago | parent [-] | | Sure. How about the tons of products we don't even see? Those that tried to do it right on the first try, then never delivered anything because they were too slow and expensive. Or those that delivered something useless because they did not understand the users' needs. If "complaints start pouring in", that means the product is used. This in turn can mean two things: 1/ the product is actually useful despite its flaws, or 2/ the users have no choice, which is sad. | | |
| ▲ | thyristan 7 days ago | parent | next [-] | | > How about the tons of products we don't even see? Those that tried to do it right on the first try, then never delivered anything because they were too slow and expensive. I would welcome seeing fewer new crappy products. That dynamic leads to a spiral of ever crappier software: you need to be first, and quicker than your competitors. If you are first, you do have a huge advantage, because there are no other products and there is no alternative to your crapware. Coming out with a superior product second or third sometimes works, but very often doesn't; you'll be an also-ran with 0.5% market share, if you survive at all. So everyone always tries to be as crappy and as quick as possible, quality be damned. You can always fix it later, or so they say. But this view excludes the users and the general public: crapware is usually full of security problems, data leaks, and harmful bugs that endanger people's data, safety, security and livelihood. Even if the product is actually useful at first, in the long term the harm might outweigh the good. And overall, by the aforementioned spiral, every product that wins this way damages all other software products by being a bad example. Therefore I think that software quality needs some standards that programmers should uphold, that legislators should regulate and that auditors should thoroughly check. Of course that isn't a simple proposition... | | |
| ▲ | tartoran 7 days ago | parent [-] | | I agree. Crapware is crapware by design, not because there was a good idea but the implementation was lacking. We're blessed that poor ideas were bogged down by poor implementation. I'm sure a few good things may have slipped through the cracks, but it's a small price to pay. |
| |
| ▲ | bonoboTP 7 days ago | parent | prev [-] | | Exactly. There is a reason for the push. The natural default of many engineers is to "do things properly", which often boils down to trying to guess all kinds of possible future extensions (because we have to get the foundations and the architecture right). Then everything becomes abstracted and there's this huge framework designed to deal with hypothetical future needs in an elegant and flexible way, with best practices, etc. And as time passes the navel-gazing nature of the project grows, where you add so much abstraction that you need more stuff to manage the abstraction, generate templates that generate the config file to manage the compilation of the config file generator, etc. Not saying this always happens, but that's what people want to avoid when they say they are okay with a quick hack if it works. |
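A caricature of that pattern, with hypothetical names throughout (a deliberately silly Python sketch, not anyone's real codebase): a strategy interface, a registry and a factory, all in service of a single behavior that a one-liner already covers.

    # Deliberately over-engineered: abstractions for future needs that never arrive.
    from abc import ABC, abstractmethod

    class GreetingStrategy(ABC):
        @abstractmethod
        def greet(self, name: str) -> str: ...

    class DefaultGreetingStrategy(GreetingStrategy):
        def greet(self, name: str) -> str:
            return f"Hello, {name}!"

    class GreeterFactory:
        _registry = {"default": DefaultGreetingStrategy}  # only entry, ever

        @classmethod
        def create(cls, kind: str = "default") -> GreetingStrategy:
            return cls._registry[kind]()

    print(GreeterFactory.create().greet("world"))

    # The quick hack that does the same job:
    print("Hello, world!")

The abstraction isn't wrong, it's just speculative: nothing else ever gets registered, so the flexibility is pure carrying cost.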
|
|
|
| |
| ▲ | camgunz 7 days ago | parent | prev [-] | | Coding is how I build a sufficiently deep understanding of the problem space--there's no separating coding and understanding for me. I acknowledge there's different ways of working (and I imagine this is one of the reasons a lot of people think they get a lot more value out of LLMs than I do), but like, having Cursor crank code out for me actually slows me down. I have to read all the stuff it does so I can coach it into doing better, and also use its work to build a good mental model of the problem, and all that takes longer than writing the code myself. | | |
| ▲ | thyristan 7 days ago | parent | next [-] | | Well, actually there could be a separate step: understanding is done during and after gathering requirements, before and while writing specifications. Only then are specifications turned into code. But almost no-one really works like that, and those three separate steps are often done ad-hoc, by the same person, right when the fingers hit the keys. | | |
| ▲ | camgunz 7 days ago | parent | next [-] | | I can use those processes to understand things at a high level, but when those processes become detailed enough to give me the same level of understanding as coding, they're functionally code. I used to work in aerospace, and this is the work systems engineers are doing, and their output is extremely detailed--practically to the level of code. There are downsides of course, but the division of labor is nice because they don't need to, like, decide algorithms or factoring exactly, and I don't need to be like, "hmm this... might fail? should there be a retry? what about watchdog blah blah". | |
| ▲ | naasking 7 days ago | parent | prev | next [-] | | > Well, actually there could be a separate step: understanding is done during and after gathering requirements, before and while writing specifications. Only then are specifications turned into code. The promise of coding AI is that it can maybe automate that last step so more intelligent humans can actually have time to focus on the more important first parts. | |
| ▲ | Ma8ee 6 days ago | parent | prev [-] | | We used to call that Waterfall, and it has been frowned upon for a while now. So we went full circle, again. | | |
| ▲ | thyristan 6 days ago | parent [-] | | Waterfall is a caricature straw-man process where you can never ever go back to the drawing board and change the requirements or specifications. The defining characteristic is the big-design-up-front part, where you can never go back and really have to do everything in strict order for the whole of the project. Just having requirements and a specification isn't necessarily waterfall. Almost all agile processes at least have requirements; the more formal ones also have specifications. You just do it more than once in a project, like once per sprint, story or whatever. | | |
| ▲ | Ma8ee 6 days ago | parent [-] | | Waterfall certainly has processes for going back and adjusting previous steps after learning things later in the process. The design was updated if something didn't work out during implementation, and of course the implementation was changed after errors were found during testing. Now that agile practitioners have learned that requirements and upfront design actually are helpful, the only difference seems to be that the loops are tighter. That might not have been possible earlier, without proper version control, without automated tests, and with software being delivered on physical media. A tight feedback loop is harder when someone has to travel to your customer and sit down at their machines to do any updates. |
|
|
| |
| ▲ | lwhi 7 days ago | parent | prev [-] | | That thinking and understanding can be done before coding begins, but I think we need to understand the potential implementation layer well in order to spec the product or service in the first place. My feeling is that software developers will end up working in this type of technical consultant role once LLM dominance has been universally accepted. |
|
|
| |
| ▲ | pjc50 7 days ago | parent | next [-] | | Apart from various C UB fiascos, the compiler is neither a black box nor magic, and most of the worthwhile ones are even deterministic. | | |
| ▲ | viralpraxis 7 days ago | parent [-] | | Sorry for going off-topic, but are there any non-deterministic compilers you can name? I've been wondering for a while if they actually exist. | | |
| ▲ | pjc50 7 days ago | parent [-] | | Accidentally non-deterministic compilers are fairly easy to end up with if you use sort algorithms and containers that aren't "stable". You can then get situations where OS page allocation and things like different filenames give different output. This is why "deterministic build" wasn't just the default. Actual randomness is used in FPGA and ASIC compilers, which use simulated annealing for layout. Sometimes the tools let you set the seed. |
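To make the first point concrete, here is a minimal sketch (plain Python, not any real compiler; the "symbol table" and digest are made up for illustration). It keeps its symbols in an unordered set, and because Python randomizes string hashing per process unless PYTHONHASHSEED is pinned, two runs over identical input can emit the symbols in a different order, so the artifact's hash changes even though nothing meaningful did.

    # Toy illustration only: a "code generator" whose output order leaks from
    # an unordered container. Run the script twice without pinning
    # PYTHONHASHSEED and the emitted order (and digest) can differ.
    import hashlib

    symbols = {"init", "main", "parse", "emit", "cleanup"}  # unordered container

    output = "\n".join(symbols)                 # order depends on the hash seed
    print("artifact digest:", hashlib.sha256(output.encode()).hexdigest())

    # The "deterministic build" fix: impose a total order before emitting.
    stable = "\n".join(sorted(symbols))
    print("stable digest:  ", hashlib.sha256(stable.encode()).hexdigest())

The same failure mode applies to hash maps keyed by pointers or filenames: the data is identical from run to run, but the traversal order is not.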
|
| |
| ▲ | dpoloncsak 7 days ago | parent | prev [-] | | I think you're misunderstanding. AI is not a black box, and neither is a compiler. We (as a species) know how they work, and what they do. The 'black boxes' are the theoretical systems non-technical users are building via 'vibe-coding'. When your LLM says we need to spin up an EC2 instance, users will spin one up. Is it configured? Why is it configured that way? Do you really need a VPS instead of a Pi? These are questions the users, who are building these systems, won't have answers to. | |
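For instance (a purely hypothetical sketch using boto3; the AMI ID, region and instance type below are placeholders, not anything a particular assistant actually produced), the kind of snippet an LLM might hand a non-technical user "works", but leaves every consequential decision to account defaults:

    # Hypothetical sketch: spinning up an EC2 instance with nothing but defaults.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # why this region?
    resp = ec2.run_instances(
        ImageId="ami-12345678",    # placeholder AMI: who vetted it?
        InstanceType="t3.micro",   # sized how? maybe a Pi would do
        MinCount=1,
        MaxCount=1,
        # No SecurityGroupIds, SubnetId or KeyName given, so the default VPC
        # and default security group are chosen silently.
    )
    print(resp["Instances"][0]["InstanceId"])

Every commented line is a question the person running it should be able to answer, and usually can't.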
| ▲ | drdeca 6 days ago | parent | next [-] | | If there are cryptographically secure program obfuscation (in the sense of indistinguishability obfuscation) methods, and someone writes some program, applies the obfuscation method to it, publishes the result, deletes the original version of the program, and then dies, would you say that humanity "knows how the (obfuscated) program works, and what it does"? Assume that the obfuscation method is well understood. When people do interpretability work on some NN, they often learn something. What is it that they learn, if not something about how the network works? Of course, we (meaning, humanity) understand the architecture of the NNs we make, and we understand the training methods. Similarly, if we have the output of an indistinguishability obfuscation method applied to a program, we understand what the individual logic gates do, and we understand that the obfuscated program was a result of applying an indistinguishability obfuscation method to some other program (analogous to understanding the training methods). So, like, yeah, there are definitely senses in which we understand some of "how it works", and some of "what it does", but I wouldn't say of the obfuscated program "We understand how it works and what it does." (It is apparently unknown whether there are any secure indistinguishability obfuscation methods, so maybe you believe that there are none, and in that case maybe you could argue that the hypothetical is impossible, and therefore the argument is unconvincing? I don't think that would make sense though, because I think the argument still makes sense as a counterfactual even if there are no cryptographically secure indistinguishability obfuscation methods. [EDIT: Apparently it has in the last ~5 years been shown, under relatively standard cryptographic assumptions, that there are indistinguishability obfuscation methods after all.]) | |
| ▲ | mr_toad 7 days ago | parent | prev [-] | | > AI is not a black-box Any worthwhile AI is non-linear, and its output is not able to be predicted (if it was, we'd just use the predictor). |
|
|