toprerules 2 days ago

People are absolutely insane with their takes on AI replacement theory. The complexity of our stacks has grown exponentially since the 70s. Very few people actually comprehend how many layers of indirection, performance, caching, etc. are between their CRUD web app and bare metal these days.

AI is going to increase the rate of complexity tenfold by spitting out enormous amounts of code. This is where the job market for developers is. Unless you 100% solve the problem of feeding it every single third-party monitoring tool's output, logs, compiler output, and system stats down to the temperature of the RAM, and then make it actually understand how to fix said enormous system (it can't do this even if you did give it the context, by the way), AI will only increase the number of engineers you need.

hn_throwaway_99 2 days ago | parent | next [-]

> AI is going to increase the rate of complexity tenfold by spitting out enormous amounts of code.

This is true, and I am (sadly, I'd say) guilty of it. In the past, for example, I'd be much more wary of too much duplication. I was working on a Go project where I needed multiple levels of object mapping (e.g. entity objects to DTOs), and the LLM just spat out the answer in seconds (correctly, I'd add), even though it was lots and lots of code where in the past I would have written a more generic solution to avoid writing so much boilerplate.
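
To give a flavor of the boilerplate I mean, here's a minimal sketch (hypothetical UserEntity/UserDTO types, not my actual project); imagine this hand-written shape repeated for every entity/DTO pair in the app:

    package main

    import (
        "fmt"
        "time"
    )

    // One hand-written mapper per entity/DTO pair. An LLM will
    // happily emit dozens of these in seconds.
    type UserEntity struct {
        ID        int64
        Name      string
        Email     string
        CreatedAt time.Time
    }

    type UserDTO struct {
        ID    int64  `json:"id"`
        Name  string `json:"name"`
        Email string `json:"email"`
    }

    func UserEntityToDTO(e UserEntity) UserDTO {
        return UserDTO{ID: e.ID, Name: e.Name, Email: e.Email}
    }

    // ...and the same again for Order, Invoice, Customer, etc.

    func main() {
        e := UserEntity{ID: 1, Name: "Ada", Email: "ada@example.com", CreatedAt: time.Now()}
        fmt.Printf("%+v\n", UserEntityToDTO(e))
    }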

I see where the evolution of coding is going, and as a late-middle-aged developer it has made me look for the exits. I don't disagree with the business rationale of the direction, and I certainly have found a ton of value in AI (e.g. I think it makes learning a new language a lot easier). But it makes programming so much less enjoyable for me personally. I feel like it's transformed the job from "author" to "editor", and for me, the nitty-gritty details of programming were the fun part.

Note I'm not making any broad statement about the profession generally, I'm just stating with some sadness that I don't enjoy where the day-to-day of programming is heading, and I just feel lucky that I've saved up enough in the earlier part of my career to get out now.

aerhardt a day ago | parent | next [-]

I don't always program in the small, and I still feel that AIs provide plenty of opportunity for architecture, design, and refactoring. For me it's been an absolute boon; I'm enjoying building more than ever. At any rate, it's undeniably transformative, and I can see many people not enjoying the end state.

outside1234 2 days ago | parent | prev [-]

Really? I sort of feel the opposite. I am mid-career as well and HIGHLY TIRED of writing yet another set of boilerplate to do a thing, or chasing down some syntax error in the code, and the fact that AI will now do this for me has given me a lot more energy to focus on the higher-level thinking about how it all fits together.

codr7 a day ago | parent | next [-]

So instead of being creative and finding ways to avoid duplication, you look for a way to make copies faster.

That's one way to solve the problem.

Not the way I'm looking for when hiring.

JohnBooty a day ago | parent | next [-]

    So instead of being creative and finding ways to avoid
    duplication, you look for a way to make copies faster.

That's not at all how I read the parent post. It feels more like you're replying to a hybrid of the grandparent post (the person who churned out a lot of duplicated code with AI) and the parent post (the person who likes being "editor" and finds AI helpful).

AnimalMuppet a day ago | parent | prev [-]

This has happened before.

When we went from assembler to compilers, this logic would have said, "So instead of being creative and finding ways to avoid hand-coding loops, you look for a way to spit out copies of loops faster." And the answer is, "Yeah, I do! I want the compiler to write the loop for me and get all the jumps right so I don't have to worry about it! Getting the jumps for the loop right is incidental to what I'm actually trying to write; I've got better things to spend my time on."
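
To make the analogy concrete, here's a trivial Go sketch of the bargain: the range loop below is all we write, and the compiler emits the counter initialization, bounds test, and backward jump that an assembly programmer once got wrong by hand:

    package main

    import "fmt"

    // sum adds the elements of xs. The loop's jump bookkeeping --
    // init, compare, branch back -- is generated by the compiler.
    func sum(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        fmt.Println(sum([]int{1, 2, 3, 4})) // 10
    }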

Note well: I am not arguing that AI will produce good code rather than multiple layers of garbage. I am merely saying that this particular argument is weak.

codr7 a day ago | parent | next [-]

You're comparing an LLM to a compiler, which doesn't make much sense to me.

If my compiler occasionally output the recipe for vegan pancakes instead of working code I would definitely think differently of it.

AnimalMuppet a day ago | parent [-]

I'm comparing an LLM to a compiler to the degree that they automate much of the writing of the tedious parts of code, rather than finding ways to reduce the amount of such code written (and therefore to the degree warranted by the argument in your previous post).

I will admit that compilers don't hallucinate much.

xigoi a day ago | parent | prev | next [-]

The difference is that a high-level programming language abstracts away the duplication, whereas an LLM does not.
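
A hedged illustration of the difference, using Go generics (hypothetical MapSlice helper): the language lets you write the duplicated "convert every element" pattern once, whereas the LLM route re-emits a fresh copy of it at every call site:

    package main

    import "fmt"

    // MapSlice captures the repeated conversion pattern once;
    // callers only supply the per-type logic.
    func MapSlice[T, U any](in []T, f func(T) U) []U {
        out := make([]U, 0, len(in))
        for _, v := range in {
            out = append(out, f(v))
        }
        return out
    }

    func main() {
        ints := []int{1, 2, 3}
        strs := MapSlice(ints, func(i int) string { return fmt.Sprint(i * 10) })
        fmt.Println(strs) // [10 20 30]
    }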

player1234 21 hours ago | parent | prev [-]

Comparing current AI to a compiler is a dogwhistle for white supremacy.

hn_throwaway_99 a day ago | parent | prev [-]

Right now you're getting downvoted, but I don't disagree with you. It's not hard for me to see how lots of people like how AI helps them code (again, I find it helpful in tons of areas), so I think it's more of a personal preference kind of thing. It's also probably that I'm older (nearing 50), and I think there is a lot of good research that a fundamental shift happens in most people's brains in their 40s that makes it more difficult to make major shifts to new ways of doing things (and I've found that in myself).

I think the only thing that perhaps I don't totally agree with is the idea that AI just lets you focus on a higher level of thinking while it "takes care of the details". AI is still the leakiest of abstractions, and while coding LLMs have gotten much better over the past 2 years, I still can't just trust them, so I have to review every line that goes to prod. I just find that task much less enjoyable ("editing") than being the author of code. And heck, I'm someone who really enjoys doing code reviews. I think with code reviews my mind is in a state where I'm helping to mentor another human, and I love that aspect of it. I'm not so energetic about helping to train our robot overlords.

voidhorse 2 days ago | parent | prev | next [-]

I do not look forward to the amount of incompetence and noise that increasing adoption of these tools will usher in. I've already had to deal with a codebase in which it was clear that the author fundamentally misunderstood what a trie data structure was. I was also having a difficult time trying to talk to them about the implementation and their misconceptions. Lo and behold, I eventually found out that the reason they chose this data structure was that they asked ChatGPT what to do, and they never actually understood, conceptually, what they were doing or using. This made the whole engagement with the code and the process of fixing things way harder. Not only did I now have to fix the bunk code, I also had to spend significant time disabusing the author of their own misunderstandings...
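
For reference, and emphatically not the code in question, a minimal trie in Go looks something like this; the whole point of the structure is that strings sharing a prefix share nodes:

    package main

    import "fmt"

    // A trie stores strings by sharing common prefixes: each node
    // holds one child per next character, plus a flag marking word ends.
    type TrieNode struct {
        children map[rune]*TrieNode
        terminal bool
    }

    func NewTrie() *TrieNode {
        return &TrieNode{children: map[rune]*TrieNode{}}
    }

    func (t *TrieNode) Insert(s string) {
        n := t
        for _, r := range s {
            child, ok := n.children[r]
            if !ok {
                child = NewTrie()
                n.children[r] = child
            }
            n = child
        }
        n.terminal = true
    }

    func (t *TrieNode) Contains(s string) bool {
        n := t
        for _, r := range s {
            child, ok := n.children[r]
            if !ok {
                return false
            }
            n = child
        }
        return n.terminal
    }

    func main() {
        t := NewTrie()
        t.Insert("car")
        t.Insert("cart")
        fmt.Println(t.Contains("car"), t.Contains("ca")) // true false
    }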

CGamesPlay 2 days ago | parent | next [-]

So, AI created a job opportunity for you?

voidhorse 2 days ago | parent | next [-]

I suppose that's one way to look at it. But it's a sort of "bs" unproductive job, fixing up poor outcomes, and overall a less efficient scenario than experts doing it right in the first place. Worse, there was already a readily available implementation that could have been used here rather than a hand-rolled, half-baked AI output. In that respect, the code itself was pure noise and the whole activity was predominantly a waste of my time.

gregw2 a day ago | parent | next [-]

Sounds like outsourcing/offshoring!

marcosdumay a day ago | parent | prev [-]

> But it's a sort of "bs" unproductive job, fixing up poor outcomes, and overall a less efficient scenario than experts doing it right in the first place.

I expect this theme to repeat all the time from now on.

I also expect it to crimp the growth of a lot of people, because the AI solves the simplest problems for them, and then they face an insurmountable wall, having to learn every concept at the same time the moment they need to go a small step beyond what the AI can produce.

Software development will probably become extremely profitable, over the next couple of decades, for the people who can do it properly.

zamalek 2 days ago | parent | prev [-]

The problem is the asinine interviews we are going to have to tolerate in order to screen against AI-kiddies. You think HackerRank is bad? Just you wait...

askonomm a day ago | parent [-]

Here's hoping that hiring managers / HR will finally start actually calling references and/or checking open source contributions. I have plenty of both, yet in my 14-year career only 2 companies have called my references and only 1 has checked my open source work. Instead they all just give me some bs test job to prove that I can do basic CRUD programming despite being a senior engineer, over and over and over again. All of which could be avoided with just a phone call and a basic conversation about tech with their technical team ... but they are either lazy, incompetent, or a combination of both, so I get a bs test job, sometimes even before I manage to get a first interview. I decline all of these, of course, but there are so many of them that it has made job searching quite difficult.

codr7 a day ago | parent [-]

You had someone check your work?

That's awesome!

Never happened so far in my 26-year career. It's almost as if they're hiring for something other than solving problems and writing code. Following orders, most likely.

askonomm a day ago | parent [-]

To be fair I figure it was only because I applied for a Clojure job, which is a pretty niche thing, and so perhaps that attracts people who are actually interested in you as a human being and not just a number in a spreadsheet. Since then I've gone back to mainstream and it has not happened again.

itsoktocry 20 hours ago | parent | prev | next [-]

>Not only did I now have to fix the bunk code, I also had to spend significant time disabusing the author of their own misunderstandings...

People with your attitude will be the first to be replaced.

Not because your code isn't as good as an AI's; maybe it's even better. But because your personality makes you a bad teammate.

saaaaaam 2 days ago | parent | prev [-]

That’s called consultancy and you can bill chunky rates by the hour. You should be rubbing your hands with glee!

And then work out how to do code review and fixing using AI, lightly supervised by you so that you can do it all whilst walking the dog or playing croquet or something.

perrygeo a day ago | parent | prev | next [-]

I've yet to see an LLM response or an LLM-generated diff that suggests removing or refactoring code. Every AI solution is additive: new functions, new abstractions added at every step. Increased complexity is all but baked into the system.

Software engineering jobs involve working in a much wider solution space - writing new code is but one intervention among many. I hope the people blindly following LLM advice realize their lack of attention to detail and "throw new code at it" attitude comes across as ignorant and foolish, not hyper-productive.

4b11b4 a day ago | parent | next [-]

Ask for a refactor...

Ask for multiple refactors and their trade-offs

outside1234 14 hours ago | parent | prev [-]

Add it as an option when you ask it and see what happens

perrygeo 11 hours ago | parent [-]

And how would a new developer know to ask for a refactor? That's my point.

plagiarist 2 days ago | parent | prev | next [-]

I agree they cannot handle a complex codebase at all at this moment in time.

But I think I would rather just end my career instead of transitioning into fixing enormous codebases written by LLMs.

JTyQZSnP3cQGa8B a day ago | parent | prev | next [-]

The complexity has grown but not the quality. We went from writing Ada code with contracts, all sorts of protections, and well-thought-out architectures, to random crap written in ReactJS on web sites that now weigh more than a full install of Windows 95.

I'm really ashamed of what SWE has become, and AI will increase that tenfold, as you say. We shouldn't cheer that on, especially since I will have to debug all that crap.

And if it increases the number of engineers, they won't be good ones, due to a lack of education (I already experience this at work). But anyway, I don't believe it; managers will not waste more money on us, as that would go against modern capitalism.

toprerules a day ago | parent [-]

Oh yes, I'm with you. I didn't say I liked it. I am a low-level munger and I like it that way: the lowest, oldest layers of the stack tend to be the pieces that are well written and stand the test of time. Where I see AI hitting is the upper, devil-may-care layers of the application stack, which will be an absolute hellscape for a competent engineer to deal with.

bigbones 2 days ago | parent | prev | next [-]

I expect pretty much the opposite to happen: it makes sense for languages, stacks, and interfaces to become more amenable to interfacing with AI. Whenever a machine can act more reliably, given simplified inputs, at a fraction of the cost of the equivalent human labour, the system has always adjusted to accommodate the machine.

The most obvious example of this already happening is in how function calling interfaces are defined for existing models. It's not hard to imagine that principle applied more generally, until human intervention to get a desired result is the exception rather than the rule as it is today.
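
Roughly, and this is the general shape most model APIs accept rather than any one vendor's exact schema, a tool is declared as a name, a description for the model to read, and a JSON Schema for its arguments; a machine-first interface rather than a human one:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // ToolDef approximates a function-calling ("tool") definition:
    // the interface is simplified into a machine-readable contract.
    type ToolDef struct {
        Name        string         `json:"name"`
        Description string         `json:"description"`
        Parameters  map[string]any `json:"parameters"`
    }

    func main() {
        t := ToolDef{
            Name:        "get_weather",
            Description: "Return the current weather for a city.",
            Parameters: map[string]any{
                "type": "object",
                "properties": map[string]any{
                    "city": map[string]any{"type": "string"},
                },
                "required": []string{"city"},
            },
        }
        b, _ := json.MarshalIndent(t, "", "  ")
        fmt.Println(string(b))
    }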

I spent most of the past 2 years in "AI cope" mode and wouldn't consider myself a maximalist, but it's impossible not to see already from the nascent tooling we have that workflow automation is going to improve at a rapid and steady rate for the foreseeable future.

JumpCrisscross 2 days ago | parent | next [-]

> it makes sense for languages, stacks and interfaces to become more amenable to interfacing with AI

The theoretical advance we're waiting for in LLMs is auditable determinism. Basically, the ability to take a set of prompts and have a model recreate what it did before.

At that point, the utility of human-readable computer languages sort of goes out the door. The AI prompts become the human-readable code, the model becomes the interpreter and it eventually, ideally, speaks directly to the CPUs' control units.

This is still years--possibly decades--away. But I agree that we'll see computer languages evolving towards auditability by non-programmers and reliability in parsing by AI.

SkiFire13 2 days ago | parent | next [-]

> The theoretical advance we're waiting for in LLMs is auditable determinism.

Non-determinism in LLMs is currently a feature, introduced consciously. Even if it weren't, you would have to lock yourself to a specific model, since any future update would necessarily be a possibly breaking change.
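
A rough sketch of where that deliberate randomness lives, with toy logits rather than a real model: the sampler divides scores by a temperature before normalizing, and as the temperature approaches zero the draw collapses to the single most likely token, i.e. deterministic greedy decoding:

    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    // sampleToken draws a token index from toy logits using
    // temperature sampling; temp near 0 degenerates to argmax.
    func sampleToken(logits []float64, temp float64, rng *rand.Rand) int {
        probs := make([]float64, len(logits))
        sum := 0.0
        for i, l := range logits {
            probs[i] = math.Exp(l / temp)
            sum += probs[i]
        }
        r := rng.Float64() * sum
        for i, p := range probs {
            r -= p
            if r <= 0 {
                return i
            }
        }
        return len(probs) - 1
    }

    func main() {
        logits := []float64{2.0, 1.0, 0.5}  // toy next-token scores
        rng := rand.New(rand.NewSource(42)) // fixed seed = replayable draws
        fmt.Println(sampleToken(logits, 1.0, rng))  // varies with the seed
        fmt.Println(sampleToken(logits, 0.01, rng)) // effectively always 0
    }

Pinning the RNG seed, as above, gives replay for one frozen model; any retraining or quantization change breaks it.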

> At that point, the utility of human-readable computer languages sort of goes out the door.

Its utility is having a non-ambiguous language to describe your solution in and that you can audit for correctness. You'll never get this with an LLM because its very premise is using natural language, which is ambiguous.

JumpCrisscross 2 days ago | parent [-]

> Non-determinism in LLMs is currently a feature and introduced consciously. Even if it wasn't, you would have to lock yourself on a specific model, since any future update would necessarily be a possibly breaking change

What I'm suggesting is a way to lock the model and then be able to have it revert to that state to re-interpret a set of prompts deterministically. When exploring, it can still branch non-deterministically. But once you've found a solution that works, you want the degrees of freedom to be limited.

> You'll never get this with an LLM because its very premise is using natural language, which is ambiguous

That's the point of locking the model. You need the prompts and the interpreter.

SkiFire13 a day ago | parent [-]

> That's the point of locking the model. You need the prompts and the interpreter.

This still doesn't seem to work for me:

- even after locking the LLM state you still need to understand how it processes your input, which is a task nobody has been able to do yet. Even worse, this can only happen after locking it, so it needs to be done for every project.

- the prompt is still ambiguous, so either you need to refine it to the point it becomes more similar to a programming language or you need an unlimited set of rules for how it should be disambiguated, which an auditor needs to learn. This makes the job of the auditor much harder and error prone.

bigbones 2 days ago | parent | prev [-]

> The theoretical advance we're waiting for in LLMs is auditable determinism

I think this is a manifestation of machine thinking - the majority of buyers and users of software rarely ask for or need this level of perfection. Noise is everywhere in the natural environment, and I expect it to be everywhere in the future of computing too.

JumpCrisscross 2 days ago | parent [-]

> the majority of buyers and users of software rarely ask for or need this level of perfection

You're right. Maybe just reliable replicability, then.

The core point is, the next step is the LLM talking directly to the control unit. No human-readable code in between. The prompts are the code.

toprerules 2 days ago | parent | prev | next [-]

You're missing the point: there are specific reasons why these stacks have grown in complexity. Even if you introduce "API for AI interface" as a requirement, you still have to balance that with performance, reliability, interfacing with other systems, and providing all of the information necessary to debug when the AI gets it wrong. All of the same things that humans need apply to AI; the claim for AI isn't that it deterministically solves every problem it can comprehend.

So now we're looking at a good several decades of just getting our human-facing systems to adapt themselves to AI, while they still require all the complexity they already have. The end result is more complexity, not less.

2 days ago | parent | next [-]
[deleted]
bigbones 2 days ago | parent | prev [-]

Based on what I've seen so far, I'm thinking a timeline more like 5-10 years in which anything frontend, at least, has all but evaporated. What value is there in having a giant app team grind for 2 years on the perfect Android app when a user can simply ask for the display they want, and 5 variants of it until they're happy, all in a couple of seconds while sitting in the back of a car? What happens to all the hundreds of UI frameworks when a system as widespread as Android adopts a technology approach like this?

Backend is significantly murkier, there are many tasks it seems unlikely an AI will accomplish any time soon (my toy example so far is inventing and finalizing the next video compression standard). But a lot of the complexity in backend derives from supporting human teams with human styles of work, and only exists due to the steady cashflow generated by organizations extracting tremendous premiums to solve problems in their particular style. I have no good way to explain this - what value is a $500 accounting system backend if models get good enough at reliably spitting out bespoke $15 systems with infinite customizations in a few seconds for a non-developer user, and what of all the technologies whose maintenance was supported by the cashflows generated by that $500 system?

tehjoker 2 days ago | parent [-]

These don't sound like the kinds of problems programmers solve. Users don't want to customize their UI (well, some do); they want a UI that is adapted to their needs. They only want to customize it when it doesn't meet their needs (for example, when a corporation uses addictive features or hides things users need in order to increase engagement).

Accounting software has to be validated, and part of the appeal is that it simplifies and consolidates workflows across huge bureaucracies. I don't see how on earth you can just spit one out from a prompt and expect it to replace anything.

I work on a compression algorithm myself, and I've found AI of limited utility. It does help me translate things for interfacing between languages and it can sometimes help me try out ideas, but I have to write almost everything myself.

EDIT: It is true that lower-skilled jobs are going to change or reduce in quantity in the short term. To a certain degree there might be a Jevons paradox in terms of the quantity of code that needs management.

Imagine companies churning out tons and tons of code that no one understands and that behaves bizarrely. Maybe it will become a boutique thing for companies to have code that works properly, and people will just accept broken user interfaces or whatever so long as there are workarounds.

dmix 2 days ago | parent | prev [-]

I wonder if AI is going to reduce the number of JS UIs. AI bots can navigate simple HTML forms much more easily than crazy React code with 10 layers of divs for a single input. It's either that, or people create APIs for everything and document how they relate and interact.
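
A minimal illustration of the asymmetry (hypothetical example.com/search endpoint): a plain HTML form can be driven with a single HTTP request, while a React UI typically needs a full browser runtime before a bot can even see the inputs:

    package main

    import (
        "fmt"
        "net/http"
        "net/url"
    )

    func main() {
        // Submitting a classic HTML form is one POST with the field values;
        // no JS engine, no DOM, no div-soup traversal required.
        resp, err := http.PostForm("https://example.com/search",
            url.Values{"q": {"widgets"}})
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }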

baq 2 days ago | parent [-]

Claude is so good at React that the number of UIs will increase.

theGnuMe 2 days ago | parent | prev [-]

I wonder if anyone is applying AI to COBOL…

Blackthorn 2 days ago | parent | next [-]

There are no readily available Stack Overflow answers for COBOL, so it'll do about as well there as it does with digital signal processing.

stray 2 days ago | parent | prev [-]

I think IBM is using LLMs to rewrite COBOL code into Java.