| ▲ | 3 days ago |
| We have hard evidence of it becoming easier every damn day. AI is taking these jobs. The models aren't perfect, but the speed tradeoff is so massive that you really can't say it's "hard" to build anything anymore. Nobody is lying. |
|
| ▲ | contagiousflow 6 hours ago | parent | next [-] |
| What is the hard evidence? Edit: What I mean by this is that there may be some circumstantial evidence (less hiring of juniors, more AI companies getting VC funding), but we currently have no _hard_ evidence that LLMs have produced a substantial speedup or deskilling in programming. Any actual __science__ on this has yet to show it. But please, if you have _hard_ evidence on this topic I would love to see it. |
| |
| ▲ | fullshark 5 hours ago | parent [-] | | The closest, I guess, is that hiring of juniors is down, but that's possibly just a post-COVID pullback being credited to AI. I definitely think a lot of junior tasks are being replaced with AI, and companies are deciding, at least temporarily, that it's not worth filling junior roles as a result. | | |
| ▲ | datsci_est_2015 5 hours ago | parent | next [-] | | I don’t think this is unique to software. Across the US over the past few decades there’s been a massive contraction in companies willing to “train up” employees. It’s greedy, and it works for their bottom lines. But it’s a tragedy of the commons and a race to the bottom. It also explains the dearth of opportunities for getting into the trades, despite sky-high demand. If anything, the expectations for an individual developer have never been higher, and now you’re not getting any 22-26 year olds with enough software experience to be anything but a drain on resources when profitability is demanded yesterday. Maybe we need to go back to ZIRP, if only to get some juniors back onto the training schedule across all industries. For other insanely toxic and maladaptive training situations, also see: medicine in the US. | |
| ▲ | chasd00 5 hours ago | parent | prev | next [-] | | > I definitely think a lot of junior tasks are being replaced with AI I think team expansion is being reduced as well. If you took a dev team of 5 and armed them all with Claude Code plus training on where to use it and where not to, I think you could get the same productivity as hiring 2 additional FTE software devs. I'm assuming your existing 5 devs fully adopt the tool and don't reject it like a bad organ transplant. Maybe an analogy is the invention of email reducing the need for corporate typing pools, and therefore fewer jr. secretaries (typists) being hired. /I'm just guessing that secretary is on the career progression path of someone in the typing pool, but you get the idea. edit: one thing I missed in my email analogy is that when email was invented it was free and available to anyone who could set up sendmail or another MTA | |
| ▲ | chasd00 5 hours ago | parent | prev [-] | | > I definitely think a lot of junior tasks are being replaced with AI One last thing to point out, then my lunch is over. I think AI coding agents are going to hit services/marketplaces like Fiverr especially hard. I think AI agents are the new gig economy with respect to code. I spent about $50 on Claude Code pay-as-you-go over the past 3 days to put together a website I've had in the back of my mind for months. Claude Code got it to a point where I can easily pick it up and finish it out over a few more nights/weekends. UI/UX is especially tedious for me, and Claude Code was able to take my vague descriptions and make the interface nicely organized and contemporary. The architecture is perfectly reasonable for what I want to do (Auth0 + React + Python (Flask) + Postgres + an OAuth2 integration to a third party). It got all of that about 95% right on the first try... for $50! Services/marketplaces like Fiverr have to be thinking really hard right now. |
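For a sense of what a stack like the one described above involves, here is a minimal hypothetical sketch of the Flask + Auth0 + Postgres piece. The route, the "projects" table, and the environment variable names are illustrative assumptions, not the commenter's actual code; it assumes PyJWT (with the cryptography extra) and psycopg2 are installed.

```python
# Hypothetical sketch only: an Auth0-protected Flask endpoint backed by Postgres,
# roughly the shape of the stack described in the comment above. Route names,
# the "projects" table, and environment variables are made up for illustration.
import os
from functools import wraps

import jwt  # PyJWT, with the "cryptography" extra for RS256
import psycopg2
from flask import Flask, jsonify, request

app = Flask(__name__)

AUTH0_DOMAIN = os.environ["AUTH0_DOMAIN"]        # e.g. "your-tenant.us.auth0.com"
API_AUDIENCE = os.environ["AUTH0_API_AUDIENCE"]  # the API identifier configured in Auth0
jwks_client = jwt.PyJWKClient(f"https://{AUTH0_DOMAIN}/.well-known/jwks.json")


def require_auth(view):
    """Verify the Auth0-issued bearer token before calling the view."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return jsonify(error="missing bearer token"), 401
        token = auth.split(" ", 1)[1]
        try:
            signing_key = jwks_client.get_signing_key_from_jwt(token).key
            claims = jwt.decode(
                token,
                signing_key,
                algorithms=["RS256"],
                audience=API_AUDIENCE,
                issuer=f"https://{AUTH0_DOMAIN}/",
            )
        except jwt.PyJWTError:
            return jsonify(error="invalid token"), 401
        return view(claims, *args, **kwargs)
    return wrapper


@app.route("/api/projects")
@require_auth
def list_projects(claims):
    # Plain Postgres query; a React frontend would consume this JSON.
    with psycopg2.connect(os.environ["DATABASE_URL"]) as conn, conn.cursor() as cur:
        cur.execute("SELECT id, name FROM projects WHERE owner = %s", (claims["sub"],))
        rows = cur.fetchall()
    return jsonify([{"id": r[0], "name": r[1]} for r in rows])
```

Whether an agent gets details like audience checks, issuer checks, and algorithm pinning right on the first try is exactly the kind of question the security sub-thread further down is about.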
|
|
|
| ▲ | xmprt 5 hours ago | parent | prev | next [-] |
| If you think of building software as just writing the code, then sure, AI makes things a lot easier. But if software engineering also includes security, setting up and maintaining infrastructure, choosing the right tradeoffs, understanding how to deal with evolving requirements without ballooning code complexity, etc., then AI struggles with that at the moment. |
| |
| ▲ | fastball 5 hours ago | parent [-] | | With infra-as-code, an LLM can also set up and maintain infra. Security is another issue, and 100% it still seems to be the biggest footgun with agentic software development, but honestly that is mostly a prompting/context issue. You can definitely get an LLM to write secure code; it is just arguably not any model's "default". | | |
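To make the infra-as-code point concrete, here is a minimal hypothetical sketch using Pulumi's Python SDK; the bucket and resource names are made up and nothing here comes from the thread. Because the infrastructure is ordinary code, an LLM's proposal and a human's review happen in the same place.

```python
# Hypothetical infra-as-code sketch (Pulumi + AWS): a private S3 bucket with
# public access blocked. Resource names are illustrative only.
import pulumi
import pulumi_aws as aws

# The bucket itself.
bucket = aws.s3.Bucket("app-assets")

# The security posture lives in code too, so it shows up in review.
aws.s3.BucketPublicAccessBlock(
    "app-assets-no-public-access",
    bucket=bucket.id,
    block_public_acls=True,
    block_public_policy=True,
    ignore_public_acls=True,
    restrict_public_buckets=True,
)

pulumi.export("bucket_name", bucket.id)
```

Running `pulumi preview` shows the proposed changes as a diff before anything is applied, which is where the human review of LLM-drafted infra would fit in.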
| ▲ | omnimus 5 hours ago | parent | next [-] | | The problem is not whether the LLM writes secure code. The problem is whether you can know and understand that the code is reasonably secure. And that requires a pretty deep understanding of the program, and that understanding is (for most people) built by developing the program. I'm not sure how it is for others, but for me it's a lot harder to read a chunk of code to understand and verify it than to take the problem head-on with code and then maybe consult an LLM. | |
| ▲ | chasd00 5 hours ago | parent | prev [-] | | I think the industry is going to end up with exceptional software engineers organizing and managing many average coding assistants. The problem is the vast majority of us are not exceptional software engineers (obviously). |
|
|
|
| ▲ | themafia 4 hours ago | parent | prev [-] |
| > but the speed tradeoff |
| If you only care about a single metric you can convince yourself to make all kinds of bad decisions. |
| > Nobody is lying. |
| Nobody is being honest either. That happens all the time. |