| ▲ | butILoveLife 3 hours ago |
| > Good software is about creating longer term value and takes consistent skill & vision to execute. > Those software engineers who focus on this big-picture thinking are going to be more valuable than ever. Not to rain on our hopes, but AI can give us some options and we can pick the best. I think this eliminates all mid-level positions. Newbies are low cost and make low-stakes decisions. The most senior of seniors can make 30 major decisions per day when AI lays them out. I own a software shop, and my hires have been interns and people with the specific skill of my industry (mechanical engineers). 2 years ago, I hired experienced programmers. Now I turn my mechanical engineers into programmers. |
|
| ▲ | zaphar 3 hours ago | parent | next [-] |
| So what you are saying is that you removed the people who can make the decisions that keep your software maintainable, and kept the people who will slowly, over time, cause your software to become less maintainable? I'm not sure that tradeoff is a good one. |
| |
| ▲ | butILoveLife 3 hours ago | parent [-] | | This might have been true pre-agent AI programming, but honestly the code seems better than ever. It finds edge cases better than me. I know... I know buddy. The world changed and I don't know if I'm going to have a job. | | |
| ▲ | zaphar an hour ago | parent | next [-] | | I'm every bit as immersed in this as you are. I've been developing my own custom claude code plugins that allow me to delegate more and more to the agents. But the one thing the agent is not reliably doing for me is making sound architectural choices, maintaining long-term business context, and tracking how that context intersects with those architectural choices. I tried teaching all of that through system prompts and documentation, and it blows the context window up to an unusable size. As such, the things I was expected to do pre-agents as a highly experienced senior engineer, I am still expected to do. If you are eliminating those people from your business, then I don't know that I can ever trust the software your company produces, and thus how I could ever trust you. | | |
| ▲ | aspenmartin 32 minutes ago | parent [-] | | > making sound architectural choices and maintaining long term business context and how it intersects with those architectural choices. I completely agree with you, but this is rapidly becoming less and less the case, and it would not surprise me at all if, even by the end of this year, it's barely relevant anymore. > If you are eliminating those people from your business then I don't know that I can ever trust the software your company produces and thus how I could ever trust you. I mean, that's totally fine, but do realize that many common load-bearing enterprise and consumer software products are a tower of legacy tech debt and junior engineers writing terrible abstractions. I don't think this "well, how am I going to trust you" from (probably rightfully) concerned senior SWEs is going to change anything. |
| |
| ▲ | daveguy 2 hours ago | parent | prev [-] | | Finding edge cases is completely orthogonal to creating maintainable software. Finding edge cases ~= identifying test suites. Making software maintainable ~= minimizing future cost of effective changes. Ignoring future maintenance cost because test suites are easier to create seems like disjointed logic. | | |
| ▲ | butILoveLife 2 hours ago | parent [-] | | Im not even sure we will need maintain software. I can basically have AI rewrite entire code bases in an hour, including testing. Have you used AI agents? Specifically with SOTA models like Opus. I talked like you 3 weeks ago. But the world changed. | | |
| ▲ | switchbak 2 hours ago | parent | next [-] | | "Im not even sure we will need maintain software" (sic) - I'm not sure what your specific background is, but with a statement like that you lose all legitimacy to me. | | |
| ▲ | aspenmartin 35 minutes ago | parent [-] | | Writing's on the wall, it is true: tech debt will no longer be a thing to care about. "But who will maintain it?" Massive, massive question, rapidly becoming completely irrelevant. "But who will review it?" Humans, sure, with the assistance of AI, though the writing is also on the wall there: AI will soon become more adept at code review than any human. I can understand "losing all legitimacy" being a thing, but to me that is an obvious knee-jerk reaction from someone who is not quite grasping how this trend curve is going. |
| |
| ▲ | hobs 2 hours ago | parent | prev [-] | | And the human downstream of this random reorganization of things at will, how do they manage it? If it's AI agents all the way down, it's commoditization all the way down; if humans have to deal with it, there's some sort of cost for change even if it's zero for code. |
|
| ▲ | AlotOfReading 3 hours ago | parent | prev | next [-] |
| > Not to rain on our hopes, but AI can give us some options and we can pick the best.
A.k.a. greedy algorithms, a subject those of us on HN should be well-acquainted with. You can watch the horizon effect play out frequently in corporate decision-making. |
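The greedy-vs-optimal gap is easy to demonstrate; a minimal sketch using the standard textbook coin-change case (an illustration of the general point, not something from the thread):

```python
# Greedy coin change: always take the largest coin that still fits.
# With denominations {1, 3, 4} and amount 6, greedy picks 4+1+1
# (three coins), while the optimum is 3+3 (two coins) -- each
# locally-best choice forecloses the better long-term plan.

def greedy_change(coins, amount):
    picked = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            picked.append(c)
    return picked

def optimal_change(coins, amount):
    # Dynamic programming: best[i] = a fewest-coin multiset summing to i.
    best = {0: []}
    for i in range(1, amount + 1):
        candidates = [best[i - c] + [c] for c in coins if i - c in best]
        if candidates:
            best[i] = min(candidates, key=len)
    return best.get(amount)

print(greedy_change([1, 3, 4], 6))   # [4, 1, 1] -- three coins
print(optimal_change([1, 3, 4], 6))  # two coins: [3, 3]
```

Picking "the best" of whatever options are laid out in front of you is exactly the greedy step; whether the sequence of such picks adds up to the optimum depends on structure you only see with a longer horizon.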
|
| ▲ | gtowey 3 hours ago | parent | prev | next [-] |
| > Not to rain on our hopes, but AI can give us some options and we can pick the best. But that's kind of my point. A bunch of decisions like that tends to end up with a "random walk" effect: a bunch of tactical choices that don't add up to something strategic. They could, but it takes a human in the loop to hold onto that overall strategy. |
|
| ▲ | hobs 2 hours ago | parent | prev [-] |
AI often simply does not offer the best options and does not think strategically, and if you are constrained to its choices you will often make silly mistakes. This is why all the arguments about context windows and RAG exist: even if you asked the question of a human with all the context, there are still opinions, stated vs. unstated goals, requirements vs. non-functional requirements, etc., which will give you wildly different answers. Most of the time people don't even know the questions they want to ask. |