| ▲ | stingraycharles 7 hours ago |
| Seems to me that - optimistically - this would shift the job of a software engineer into a more formal engineering role, with the actual implementation done by AI. In the same way, in other fields, engineering and implementation differ, and implementation can be (and is) automated. No idea what form this should take, though, or whether it's even realistic. But it seems that, due to AI, formal specs and all kinds of "old school" techniques are having a renaissance while we figure out how to distribute the load between people and AI. |
|
| ▲ | torginus 13 minutes ago | parent | next [-] |
| Personally, my experience has been that once I manage to describe a problem in enough detail that a junior engineer would be able to solve it, it's good enough for an LLM as well. Which creates incentives I'm not wholly comfortable with, but the fact is that I'm more productive alone now than I used to be in a team. |
|
| ▲ | ted_dunning 7 hours ago | parent | prev | next [-] |
| That sounds right, but it can go badly wrong, because it presupposes that you can debug what the AI gets very confidently wrong. There are three legs to the stool: specification, implementation, and verification. Implementation and verification both take low-level knowledge and a sophisticated understanding of how things break. |
| |
| ▲ | adrian_b 6 hours ago | parent [-] |
| Indeed, even if it were possible for someone to create any program most of the time just by directing a team of AI agents, when something does not work one needs the ability to zoom in through the abstraction levels and understand exactly the program that is being executed, so knowing only how to generate prompts becomes insufficient.

It is the same with compilers. Most of the time a programmer needs to know only the high-level language used for writing the program. Nevertheless, when there is a subtle bug, or the desired performance cannot be reached, a programmer who also understands the machine language of the processor has a great advantage: they can solve the bug or the performance problem, which without such knowledge would take much longer to solve, or never be solved at all. |
| ▲ | SleepyMyroslav 4 hours ago | parent | next [-] |
| I don't think compilers are a good example. The economics of software development won out a long time ago. For example, in gamedev, with its well-known soft real-time requirements, people (mostly) stopped doing that machine-code dance many hardware generations ago. The same happened with memory optimizations: people measure memory in GB now, not in KB =) I am sure programmers cherish every case where they can do a micro-optimization, but in retrospect the high-level cuts are what made the system fit the perf or memory budget. |
| ▲ | don_esteban 3 hours ago | parent | prev [-] |
| 1) Luckily, compiler bugs surface very rarely nowadays, as the average programmer does not have the capability to solve such issues. 2) Unfortunately, LLMs, by their very nature (not having a model of what they do), are prone to introducing subtle bugs, i.e. it is like programming in a high-level language whose compiler likes to wing it. |
|
| ▲ | cucumber3732842 2 hours ago | parent | prev [-] |
| > this would shift the job of a software engineer into a more formal engineering role

If only you knew how the civil engineering sausage was made. The amount of yolo'ing stuff based on vibes goes up when testing is expensive/impractical. They just paper over it all with disclaimers of the sort that would get laughed at for being non-starters in the software industry. |