Labor market impacts of AI: A new measure and early evidence(anthropic.com)
217 points by jjwiseman 13 hours ago | 307 comments
throwaw12 6 hours ago | parent | next [-]

People who are saying they're not seeing a productivity boost, can you please share where it is failing?

Because I am terrified by the output I am getting while working on huge legacy codebases: it works. I described one of my workflow changes here: https://news.ycombinator.com/item?id=47271168 but in general, compared to the old way of working, I am consistently saving half of the steps, whether it's researching the codebase, integrating new things, or even making fixes. I have stopped writing code; occasionally I jump into the changes proposed by the LLM and make manual edits if that is feasible, otherwise I revert the changes and ask it to generate again, informed by what I learned from the previously rejected output.

I am terrified about what's coming

yoyohello13 5 hours ago | parent | next [-]

The companies laying off people have no vision. My company is a successful not-for-profit and we are hiring like crazy. It's not a software company, but we effectively have unlimited work. Why would anyone downsize because work is getting done faster? Just do more work, get more done, get better than the competition, get better at delivering your vision. We put profits back into the community and actually make life better for people. What a crazy fucking concept, right?

tkgally 4 hours ago | parent | next [-]

I suspect it depends partly on how locked each individual is into a particular type of work, both skill-wise and temperamentally.

To give an example from a field where LLMs started causing employment worries earlier than software development: translation. Some translators made their living doing the equivalent of routine, repetitive coding tasks: translating patents, manuals, text strings for localized software, etc. Some of that work was already threatened by pre-LLM machine translation, despite its poor quality; context-aware LLMs have pretty much taken over the rest. Translators who were specialized in that type of work and too old or inflexible to move into other areas were hurt badly.

The potential demand for translation between languages has always been immense, and until the past few years only a tiny portion of that demand was being met. Now that translation is practically free, much more of that demand is being met, though not always well. Few people using an app or browser extension to translate between languages have much sense of what makes a good translation or of how translation can go bad. Professional translators who are able to apply their higher-level knowledge and language skills to facilitate intercultural communication in various ways can still make good money. But it requires a mindset change that can be difficult.

adelie 2 hours ago | parent [-]

I'm not in translation, but a number of close friends are in the industry. Two trends I've noticed there, which I think we're seeing mirrored in tech:

1. No one cares about quality. Even in fields you'd expect to require the 'human touch' (e.g. novel translation), publishers are replacing translators with AI. It doesn't matter if you have higher-level knowledge or skills if the company gains more from cutting your contract than it loses in sales.

2. Translation jobs have been replaced with jobs proofreading machine translations, which pays peanuts (since AI is 'doing most of the work') but in fact takes almost as much effort as translating from scratch (since AI is often wrong in very subtle ways). The comparison to PR reviews makes itself.

afro88 5 hours ago | parent | prev | next [-]

This is exactly right IMO. I have never worked for a company where the bottleneck was "we've run out of things to do". That said, plenty of companies run out of actual software engineering work when their product isn't competitive. But it usually isn't competitive because they haven't been able to move fast enough.

RA_Fisher 2 hours ago | parent | prev | next [-]

Does that extra work bring in more revenue? I think that’s the key question.

raphaelj an hour ago | parent [-]

Companies that do not reduce their workforce might outcompete you.

It might not be about bringing in more revenue but about retaining market share.

throw3847r7 3 hours ago | parent | prev | next [-]

You need a certain company culture to be able to scale up and capture this value. Most companies cannot just add new developers.

AI needs documentation, automation, integration tests... It works very well for a remote-first company, but not for an in-person, informal grinding approach.

Just a year ago, a client told me to delete integration tests because "they ran too long"!

joe_mamba 2 hours ago | parent [-]

>Just a year ago, a client told me to delete integration tests because "they ran too long"!

Why are you surprised? Customers don't like spending money on items that don't add business value. Add to that QA, documentation, security audits, etc.

They want to ship stuff that brings in customers and revenue from day one; everything else is a cost.

ehnto 5 hours ago | parent | prev | next [-]

That was my insight also. As a manager, you already have the headcount approved, and your people just allegedly got some significant percentage more productive. The first thought shouldn't be "great, let's cut costs"; it should be "great, now we finally have the bandwidth to deliver faster."

On a macro level, if you were in a rising economic tide, you would still be hiring, and turning those productivity gains into more business.

I wonder what the parallels are to past automations. When part producing companies moved from manual mills to CNC mills, did they fire a bunch of people or did they make more parts?

superfrank 3 hours ago | parent [-]

I'm an EM as well and I've been telling my teams for a while now that I think they really only need to start worrying once our backlog starts going down instead of up. Generally, I still agree with that (and your) sentiment when you look at the long term, but in the short term, I think all of the following arguments can be made in favor of layoffs:

- AI tools are expensive so until the increased productivity translates to increased revenue we need to make room in the budget

- We expect the bottlenecks in our org to move from writing code to something else (PM or design or something), so we're cutting SWEs in anticipation of needing to move that budget elsewhere.

- We anticipate the skillsets needed by developers in the AI world to be so fundamentally different from what they are now that it's cheaper to just lay people off, run as lean as possible, and rehire people with the skills we want in a year or two than it is to try and retrain.

I don't necessarily agree with those arguments (especially the last one), but I think they're somewhat valid arguments

throwaw12 3 hours ago | parent [-]

I see similar arguments and I don't agree with them either; here is why:

> rehire people with the skills we want in a year or two than it is to try and retrain.

before that future comes, your company might already be obsolete, because you will have lost your market share to new entrants

> We expect the bottlenecks in our org to move from writing code to something else

I would love to tell them: hey, let's leverage the current momentum and build. When those times come, offer existing people with accumulated knowledge the chance to retrain for a new type of work. If they think they're not a good fit, they can leave; if they're willing, give them a chance. Invest in people, make them feel safe, and earn their trust and loyalty.

> AI tools are expensive so until the increased productivity translates to increased revenue we need to make room in the budget

1. It's not that expensive: $150/seat/month is five lunches. Or maybe squeeze it out of sales personnel traveling business class?

2. By the time the increased productivity is recognized by others, a company that resisted could be so far behind that it won't be able to afford hiring engineers with those skillsets. If they think $150 is expensive now, I am sure they will say "What??? $350k for this engineer? No way, I will hire contractors instead."

arwhatever 4 hours ago | parent | prev | next [-]

I’ve been screaming this too https://news.ycombinator.com/item?id=47212237

It’s refreshing to see the same sentiment from so many other people independently here.

crocowhile 4 hours ago | parent | prev | next [-]

Because hiring less while getting more done increases margins. Your company is not-for-profit, so it doesn't care about margins. Others do.

zipy124 2 hours ago | parent | prev | next [-]

The problem comes if you are a service like YouTube, where you have already captured almost the entire customer base.

svara 2 hours ago | parent | prev | next [-]

Yes, it's the lump of labor fallacy.

That doesn't exclude the possibility of short-term distributional effects, though.

threatofrain 5 hours ago | parent | prev | next [-]

These are words without weights. At some point the "put money into software" option will max out. Perhaps what we should all be doing is hiring more lawyers; there's always more legal work to be done. When you don't have weights, you can reason like this.

yoyohello13 5 hours ago | parent [-]

I don't know what kind of software you're used to, but software is pretty much universally dog shit these days. I could probably count on one hand the number of programs that I actually like using. There is astronomical room for improvement. I don't think we are hitting diminishing returns any time soon.

laurentiurad 4 hours ago | parent [-]

I talk about this at length in one of my previous posts here: https://news.ycombinator.com/item?id=39963058. I definitely share your opinion and I think this will be exacerbated by vibe coding and having LoC as the main KPI for engineering teams.

throwaw12 5 hours ago | parent | prev | next [-]

> Just do more work, get more done

That's one of the reasons why I am terrified: it can lead to burnout, and I personally don't like babysitting a bunch of agents, because the output doesn't feel "mine", and when it's not "mine" I don't feel ownership.

And I am deliberately hitting the brakes from time to time so as not to raise expectations, because I feel like I'm driving someone else's car without fully understanding how they tuned it (even though I did that tuning myself, by prompting).

ako 3 hours ago | parent | next [-]

I'm currently a product manager (I was a software engineer and technical architect before), so I already lost the feeling of ownership of code. But just like when you're doing product management with a team of software engineers, testers, and UXers, with AI you can still feel ownership of the feature or capability you're shipping. So from my perspective, nothing changes regarding ownership.

discreteevent 2 hours ago | parent [-]

> So from my perspective, nothing changes regarding ownership.

The engineer who worked with you took ownership of the code! Have you forgotten this?

ako 2 hours ago | parent [-]

No, that's why I wrote "from my perspective". I started out long ago writing 6502 and 68000 assembly, later C, and even later Java. With every step you lose ownership of the underlying layer. This is just another step. "But it's non-deterministic!" Yes, so are developers. We need QA regardless of who or what writes the lines of code.

QuercusMax 5 hours ago | parent | prev [-]

It feels very much like leading a team of junior engineers or even interns who are very fast but have no idea about why we're doing anything. You have to understand the problems you're trying to solve and describe the solutions in a way they can be implemented.

It's not going to be written exactly like you would do it, but that's ok - because you care about the results of the solution and not its precise implementation. At some point you have to make an engineering decision whether to write it yourself for critical bits or allow the agent/junior to get a good enough result.

You're reviewing the code and hand editing anyway, right? You understand the specs even if your agent/junior doesn't, so you can take credit even if you didn't physically write the code. It's the same thing.

throwaw12 5 hours ago | parent [-]

> It feels very much like leading a team of junior engineers or even interns who are very fast but have no idea about why we're doing anything

Yes, yes!

And this is a problem for me: because of the pace, my brain muscles are not developing as much as they did when I was doing those things myself.

Before, I would change my mind while implementing the code, because I see more things while typing and digging deeper. But now, because the juniors are doing the work, they don't offer me refactorings or improvements while quickly typing out the code; they obey my commands instead of having an "aha" moment and suggesting better ways.

MattGaiser 4 hours ago | parent | prev [-]

You would need to expand your capacity to find and define the work. I imagine that would be a major challenge.

jpollock 3 hours ago | parent | prev | next [-]

The last time I tried AI, I tested it with a stopwatch.

The group used feature flags...

    if (a) {
       // new code
    } else {
       // old code
    }

    void testOff() {
       disableFlag(a);
       // test it still works
    }
    
    void testOn() {
        enableFlag(a);
        // test it still works
    }
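    // Cleanup (assuming the flag has shipped permanently "on"):
    // keep the new-code branch, delete the else branch and testOff(),
    // and drop the enableFlag(a) call from testOn().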
However, as with any cleanup, it doesn't happen. We have thousands of these things lying around taking up space. I thought, "I can give this to the AI; it won't get bored or complain."

I can do one flag in ~3 minutes: code edit, PR prepped and sent.

The AI can do one in 10 minutes, but I couldn't look away. It kept trying to use find/grep to search through a huge repo for symbols (instead of the MCP service).

Then it ignored instructions and didn't clean up one or the other test, left unused fields or parameters and generally made a mess.

Finally, I needed to review and fix the results, taking another 3-5 minutes, with no guarantee that it compiled.

At that point, a task that takes me 3 minutes has taken me 15.

Sure, it made code changes, and it felt "cool", but it cost the company 5x what not using the AI would have (before considering the token cost).

Even worse, the CI/CD system couldn't keep up with my individual velocity of cleaning these up by hand; with an automated tool? Yeah, not going to be pleasant.

However, I need to try again; everyone's saying there was a step change in December.

laserlight 2 hours ago | parent | next [-]

I did my own experiment with Claude Code vs Cursor tab completion. The task was to convert an Excel file to a structured format. Nothing fancy at all.

Claude Code took 4 hours, with multiple prompts. At the end, it started to break the previous fixes in favor of new features. The code was spaghetti. There was no way I could fix it myself or steer Claude Code into fixing it the right way. Either it was a dead-end or a dice roll with every prompt.

Then I implemented my own version with Cursor tab completion. It took the same amount of time, 4 hours. The code had a clear object-oriented architecture, with a structure for evolution. Adding a new feature didn't require any prompts at all.

As a result, Claude Code was worse in terms of productivity: the same amount of time, worse quality output, no possibility of (or at best very high cost of) code evolution.

thesamethrowawa 2 hours ago | parent | next [-]

Are you able to share your prompts to Claude Code? I assume not; they are probably not saved. But this genuinely surprised me: it seems like exactly the type of task an LLM would excel at (no pun intended!). What model were you using, out of interest?

laserlight 2 hours ago | parent [-]

> this genuinely surprised me

Me too. After listening to all the claims about Claude Code's productivity benefits, I was surprised to get the result I got.

I'm not able to share details of my work. I was using Claude Opus 4.5, if I recall correctly.

shinycode 2 hours ago | parent | prev [-]

The exact same prompt? Everything depends on the prompt, and they're different tools. These days, the quality of what's built around the prompt matters as much as the code. We can't just feed it a generic query.

embedding-shape 44 minutes ago | parent | prev [-]

What model, what harness, and roughly how long was the prompt you used to fire off this piece of work? All three matter a lot, but all three are missing from your account.

mirsadm 33 minutes ago | parent | prev | next [-]

I have an app which is fairly popular. This release cycle I used Claude Code and Codex to implement all the changes/features. It definitely let me move much quicker than before.

However, now that it's in the beta stage, the number of issues and bugs is insane. I reviewed a lot of the code that went in as well. I suspect the bug-fixing stage is going to take longer than the initial implementation. There are so many issues, and my mental model of the codebase has severely degraded.

It was an interesting experiment but I don't think I would do it again this way.

maplethorpe 20 minutes ago | parent | next [-]

Rather than trying to fix the bugs yourself, have you tried asking Claude to fix them for you?

mirsadm 15 minutes ago | parent [-]

I have already been doing this. I could keep doing it, but I'm not going to. I want to be able to understand my own code, because that is what lets me make sound higher-level decisions.

truetraveller 31 minutes ago | parent | prev [-]

Thanks for the insight!

tripledry 2 hours ago | parent | prev | next [-]

Something I've been thinking about lately is whether there is value in understanding the systems we produce, and whether we're expected to.

If I can just vibe and shrug when someone asks why production is down globally, then I'm sure the number of features I can push out increases. But if I am still expected to understand and fix the systems I generate, I'm not convinced it's actually faster to vibe and then try to understand what's going on, rather than thinking and writing.

In my experience, the more I delegate to AI, the less I understand the results. The "slowness and thinking" might just be a feature, not a bug. At times I feel that AI was simply the final straw that gave the nudge to lower standards.

joe_mamba 2 hours ago | parent [-]

>if I can just vibe and shrug when someone asks why production is down globally

You're pretty high up in the development, decision, and value-addition chain if YOU are the responsible go-to person for these questions. AI has no impact on your position.

tripledry an hour ago | parent [-]

Naa, I'm just a programmer. Experience may vary depending on company and country; for me this has been true from tiny startups to global corporations.

Tangentially, I don't even know what "responsible" means in the corporate world anymore; it seems to me no one is really responsible for anything. But the one thing that's almost certain is that I will fix the damn thing if I made it go boom.

kodablah 6 hours ago | parent | prev | next [-]

> People who are saying they're not seeing a productivity boost, can you please share where it is failing?

At review time.

There are simply too many software industries that can't delegate both authorship _and_ review to non-humans, because the maintenance and use of such software, especially in libraries and backwards-compat-sensitive environments, cannot justify an "ends justify the means" approach (yet).

belZaah 4 hours ago | parent | prev | next [-]

I don't think the objections are necessarily about a lack of productivity, although my personal experience is not one of massive productivity increases. The fact that you are producing code much faster is likely just to push the bottleneck somewhere else. Software value cycles are long and complicated. What if you run into an issue in 5 years that the LLM fails to diagnose or fix due to complex system interactions? How often would that happen? Would it be feasible to just generate the whole thing anew, matching functionality precisely? Are you making the right architecture choices from the perspective of what an LLM's preferred modus operandi will be in 5 years? We don't know. The more experienced folks tend to be conservative, as they have experienced how badly things can age. Maybe this time it'll be different?

apsurd 5 hours ago | parent | prev | next [-]

AI dramatically increases velocity. But is velocity productivity? Productivity relative to which scope: you, the team, the department, the company?

The question is really, velocity _of what_?

I got this from an HN comment. It really hit for me, because the default mentality for engineers is to build: the more you build the better. That's not "wrong", but in a business setting it is very much necessary yet not sufficient. And so whenever we think about productivity, impact, velocity, or whatever measure of output, the real question is: _of what_? More code? More product surface area? That was never really the problem. In fact, it makes life worse the majority of the time.

mattmanser 4 hours ago | parent [-]

The real question is, is it increasing their velocity?

They've already admitted they just 'throw the code away and start again'.

I think we've got another victim of perceived productivity gains vs actual productivity drop.

People sitting around watching Claude churn out poor code at a slower rate than if they just wrote it themselves.

Don't get me wrong, it's great for getting you started or for writing a little prototype.

But the code is bad and riddled with subtle bugs, and if, instead of rewriting it, you're shoving large amounts of AI code into your codebase, good luck in 6-12 months' time.

wasmainiac 5 hours ago | parent | prev | next [-]

Because its failure rate is too high. Beyond boilerplate code and CRUD apps, if I let AI run freely on the projects I maintain, I spend more time fixing its changes than if I just did the work myself. It hallucinates functionality, it designs itself into corners, it does not follow my instructions, and it writes too much code for simple features.

It's fine at replacing what Stack Overflow did nearly a decade ago, but that isn't really an improvement over my baseline.

leptons 4 hours ago | parent [-]

That's my experience too. It's okay at a few things that save me some typing, but it isn't really going to do the hard work for me. I also still need to spend significant amounts of time figuring out what it did wrong and correcting it. And that's frustrating. I don't make those mistakes, and I really dislike being led down bad paths. If "code smells" are bad, then "AI" is a rotting corpse.

msvana 4 hours ago | parent | prev | next [-]

I work as an ML engineer/researcher. When I implement a change in an experiment, it usually takes at least an hour to get the results. I can use that time to implement a different experiment. It doesn't matter whether I do it by hand or let an agent do it for me; I have enough time. Code isn't the bottleneck.

I have also heard the opinion that, since writing code is cheap, people implement things that have no economic value without really thinking them through.

apsurd 4 hours ago | parent [-]

+1 on the economic value line. Not everything needs to be about money, but if you get paid to ship code it's about money. And now we have coworkers shipping insane amounts of "features" because it's all free to ship and, being engineers, it ends there.

Only it doesn't, there's product positioning, UX, information architecture, onboarding and training, support, QA, change management, analytics, reporting… sigh

embedding-shape 42 minutes ago | parent [-]

> but if you get paid to ship code it's about money.

Tip to budding software engineers: try not to work in these sorts of places, as they're about "looking busy" rather than engineering software. The latter is where real, long-lasting things are built; the former is where startup founders spend most of their money.

The last paragraph is where the tricky and valuable parts are, and also where AI isn't super helpful today, and where you as a human can actually help out a lot if you're just 10% better than the rest of the "engineers" who only want to ship as fast as possible.

dumfries 2 hours ago | parent | prev | next [-]

"it works" is a very low standard when it comes to software engineering. Why are we not holding AI generated code to the same standards as we hold our peers during code reviews?

I have never heard anyone say "it works" as a positive thing when reviewing code..

Yes, there is a productivity boost but you can't tell me there is no decrease in quality

iugtmkbdfil834 3 hours ago | parent | prev | next [-]

I don't want to generalize too much from my specific situation, but I want to offer an anecdote from my neck of the woods. On my personal subscription, I agree it is kind of crazy the kinds of projects I can get into now with little to no prior knowledge.

On the other hand, our corporate AI is... not great at the moment. It was briefly kind of decent and then it suddenly degraded. Worst of all, no one is communicating with us, so we don't know what was changed. It is possible companies are already trying to "optimize".

I know this is not exactly what you are asking. You are saying the capability is there, but I am personally starting to see cracks in corporate willingness to spend.

aurareturn 6 hours ago | parent | prev | next [-]

  People who are saying they're not seeing a productivity boost, can you please share where it is failing?
Believe it or not, I still know many devs who do not use any agents. They're still using free ChatGPT copy and paste.

I'm going to guess that many people on HN are also on the "free ChatGPT isn't that good at programming" train.

throwaw12 6 hours ago | parent | next [-]

> They're still using free ChatGPT copy and paste

Probably that's the reason why some people are sure their job is still safe.

The nature of the job is changing rapidly.

aurareturn 6 hours ago | parent [-]

I totally get tech CEOs who threaten to fire their devs who do not embrace AI tools.

I'm not a tech CEO but people who are anti-LLM for programming have no place on my team.

salawat 5 hours ago | parent [-]

And you are paying for their tokens on top of their salary, right? Right?

mikkupikku 30 minutes ago | parent | next [-]

"Bring your own tools" is not exactly novel in the workplace. Maybe so for office workers, but not more generally. Anyway, these particular tools are cheap enough that it hardly even matters who is expected to pay for them.

The $20-a-month tier in particular is a trivial expense, on par with businesses that expect their workers to wear steel-toed shoes. Some may give workers a little stipend to buy those boots, some may not. Either way, it doesn't really matter.

aurareturn 5 hours ago | parent | prev | next [-]

You can do a lot with just a $20 Codex CLI subscription. Tokens are cheap compared to the $20k we're paying for a dev each month.

ido 5 hours ago | parent | next [-]

Even the $200 Claude Max monthly subscription is peanuts compared to salary cost.

monksy 4 hours ago | parent [-]

Tell that to the company I was just at, which cut IntelliJ licenses as a cost-cutting measure.

aurareturn 4 hours ago | parent [-]

If they really want to cut cost, fire the worst dev on the team and use that money to give everyone a Codex subscription.

KronisLV 3 hours ago | parent [-]

Or better yet, fire the managers or bean counters who think decreasing everyone's productivity is good for long-term savings.

I’m reminded of https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...

mikkupikku 27 minutes ago | parent [-]

Fire the middle management, HR, etc. that have been enthusiastically using AI to do their jobs for the past two or three years already. 90% of them could be replaced by an agent with access to an email account.

hdgvhicv 4 hours ago | parent | prev | next [-]

It amazes me that people pay $20k a month for a dev rather than paying $2k a month for one in Poland or $1k a month for one from India.

There's obviously a benefit to paying higher rates for US programmers, but does that benefit hold when LLMs are thrown into the mix?

forgotlastlogin 23 minutes ago | parent [-]

2k in Poland you say...

baq 5 hours ago | parent | prev [-]

Exactly, the $20 Codex is such good value that it's irresponsible not to give it to everyone. The $20 Claude Code tier is, OTOH, pointless; the limits are good for about 10 minutes of work twice per business day.

onion2k 5 hours ago | parent | prev [-]

Every business that's taking AI seriously is giving their team enterprise accounts to AI services. Otherwise you have no control over where your code, data, company info, etc is going.

Someone deciding to drop a spreadsheet of customer data into their personal AI account to increase their productivity would be catastrophic for the business, so you need rules. And rules mean paying for enterprise AI tooling.

dataflow 6 hours ago | parent | prev | next [-]

Which one would you recommend as the best right now? Claude Code?

salawat 5 hours ago | parent | prev [-]

Not everyone has the capability to rent data-center-tier hardware just to do their job. These things require so much damn compute that you need some serious heft to daisy-chain enough stages, in parallel or in depth, to get enough tokens/sec for the experience to go ham. If you're making bags-o'-coke money and decide to fund Altman's, Zuckernut's, or Amazon/Google/Microsoft's datacenter build-out, that's on you. The rest of us are just trying to get by on the bits and bobs we've kept limping along over the years. If opencode is anything to judge the vibecoded scene by, I'm fairly sure at some point the vibe crowd will learn the lesson of isolating the most expensive computation ever from the hot loop, and then maybe find that all they ever needed was something to let the model build a context, plus a text editor.

Til then wtf_are_these_abstractions.jpg

kranke155 3 hours ago | parent | prev | next [-]

I work in commercials.

We can now make $1 million commercials for $100,000 or less, a 90% reduction in costs, if we use AI.

The issue is they don’t look great. AI isn’t that great at some key details.

But the agencies are really trying to push for it.

They think this is the way back to the big flashy commercials of old. Budgets are lower than ever, and shrinking.

The big issue here is really a misunderstanding of cause: budgets are lower because advertising has changed in general (TV is less and less important), and a lot of studies have shown that advertising is actually not all that effective.

So they are grabbing onto a lifeboat. But I’m worried there’s no land.

I’ve planned my exit.

uxcolumbo 3 hours ago | parent [-]

Is advertising not that effective in general, or just for certain channels, i.e. TV?

Also, what are you exiting to?

kranke155 2 hours ago | parent | next [-]

So my understanding, from a friend at WPP who told me the same and from a Freakonomics episode, is that advertising was wildly oversold before digital.

When the metrics arrived with digital, they saw that advertising, in some ways, was just not as effective as they'd hoped; the ROI wasn't there. Seth Godin agrees. He says that advertising in the digital era could be as simple as just having a good product. I think this is Tesla's position on it: make the best product and the internet takes care of the rest.

Legacy companies have kept large ad budgets, but those are diminishing. From what my friend at WPP told me, their data science team showed that outside of a new product, or a product that is not recognized by consumers, the actual outcomes from ads are marginal or incremental. That's what he told me. If your product is already known to consumers, the ROI is questionable.

mikkupikku 35 minutes ago | parent [-]

Advertising's foremost job is to sell the premise of advertising to business management. Selling the business's product is always secondary to that.

kranke155 2 hours ago | parent | prev [-]

My exit is storytelling. I think that’s the only thing that will remain. I suspect humans will still want to hear stories about and from other humans.

There’s something about AIs that feels wrong for storytelling. I just don’t think people will want AIs to tell them stories. And if they do… Well, I believe in human storytelling.

staticassertion 3 hours ago | parent | prev | next [-]

When it comes to novel work, LLMs become "fast typers" for me and little more. They accelerate testing phases but that's it. The bar for novelty isn't very high either - "make this specific system scale in a way that others won't" isn't a thing an LLM can ever do on its own, though it can be an aid.

LLMs are also quite bad for security. They can find simple bugs, but they don't find the really interesting ones that leverage the "gap between mental model and implementation" or a "combination of features and bugs", etc., which is where most of the interesting security work is, IMO.

asadm 3 hours ago | parent | next [-]

I think your analysis is a bit outdated these days, or you may be holding it wrong.

I am doing novel work with Codex, but it does need some prompting, i.e. exploring possibilities from the current codebase, adding papers to the prompt, etc.

For security, I generally start a new thread before committing, to review from a security point of view.

staticassertion 3 hours ago | parent [-]

You can do novel work with an LLM. You can. The LLM can't. It can be an aid - exploring papers, gathering information, helping to validate, etc. It can't do the actual novel part, fundamentally it is limited to what it is trained on.

If you are relying on the LLM and context, then unless your context is a secret your competitor is only ever one prompt behind you. If you're willing to pursue true novelty, you need a human and you can leap beyond your competition.

bdangubic 25 minutes ago | parent [-]

Of course you need a human, but you do not need nearly as many humans as there are currently in the labor force.

truetraveller 30 minutes ago | parent | prev [-]

This is basically my take as well!

oytis 3 hours ago | parent | prev | next [-]

> I have stopped writing code; occasionally I jump into the changes proposed by the LLM and make manual edits if that is feasible, otherwise I revert the changes and ask it to generate again, informed by what I learned from the previously rejected output

Isn't it a very inefficient way to learn things? Like, normally, you would learn how things work and then write the code, refining your knowledge while you are writing. Now you don't learn anything in advance, and only do so reluctantly when things break. In the end there is a codebase that no one understands.

throwaw12 2 hours ago | parent | next [-]

> Isn't it a very inefficient way to learn things?

It is. But there are 2 things:

1. Do I want to learn it? (If I am coming back to this topic in 5 months, the knowledge accumulates; but there is a temptation to finish the thing quickly, because it is so boring to swim in a huge legacy codebase.)

2. How long does it take to grasp it and implement the solution? If I can complete it with AI in 2 days vs. 2 weeks on my own, I probably do not want to spend too much time on this thing.

As I mentioned in other comments, this is exactly what makes me worried about the future of the work I will be doing: there is no attachment to the product in my brain, no mental models being built, no muscles trained. It feels like someone else's "work", because the AI explores the code and writes the code; I just judge it when I get a task.

oytis 2 hours ago | parent [-]

I don't know where it goes, but it sounds pretty dumb for the companies involved too. Tech companies are in the business of nurturing teams knowledgeable in things so they can build something that gives them an advantage over the competition. If there is no knowledge being built, there is no advantage and no tech business.

hobofan 2 hours ago | parent [-]

> Tech companies are in the business of nurturing teams knowledgeable in things

It pains the anti-capitalist fibers in my body to say this, but no, they are not. At most, the value is in organizational knowledge and existing assets (= source code, documentation), so that people with the least knowledge possible can make changes. In software companies in general, technical excellence and knowledge are not strongly correlated with economic success, as long as you clear a certain bar (which is not that high). In hardware/engineering companies, by comparison, they are much more correlated.

In the concrete example of a legacy codebase that we have here, there is even less value in trying to build up knowledge in the company, as it has already been decided that the system is to be discarded anyway.

hobofan 3 hours ago | parent | prev [-]

> you would learn how things work and then write the code

In a legacy codebase this may require learning a lot of things about how things work just to make small changes, which may be much less efficient.

oytis 2 hours ago | parent [-]

I might still be naive about the industry, but if you don't know how the legacy codebase works, you might either delegate the change to someone else in the company who does or, if there is no one left, use the opportunity to become the person who knows at least something about it.

pinkmuffinere 5 hours ago | parent | prev | next [-]

I asked Opus 4.6 how to administer an A/B test when data is sparse. My options are to look at conversion rate, look at revenue per customer, or something else. I will get about 10-20k samples; fewer than that will add to cart, fewer than that will begin checkout, and even fewer will convert. Opus says I should look at revenue per customer. I don't know the right answer, but I know it is not to look at revenue per customer: that will have high variance due to outlier customers who put in a large order. To be fair, I do use Opus frequently, and it often gives good enough answers. But you do have to be suspicious of its responses for important decisions.

Edit: Ha, and the report claims it's relatively good at business and finance...

Edit 2: After the discussion in this thread, I went back to Opus and asked it to link to articles about how to handle non-normally distributed data, and it did link to some useful articles and to an online calculator that I believe works for my data. So I'll eat some humble pie and say my initial take was at least partially wrong. At the same time, it was important to know the correct question to ask, and honestly, if it weren't for this thread, I'm not sure I would have gotten there.

onion2k 5 hours ago | parent [-]

A/B tests are a statistical tool, and outliers will mess with any statistical measure. If your data is especially prone to that you should be using something that accounts for them, and your prompt to Opus should tell it to account for that.

A good way to use AI is to treat it like a brilliant junior. It knows a lot about how things work in general but very little about your specific domain. If your data has a particular shape (e.g. lots of orders with a few large orders as outliers), you have to tell it that to improve the results you get back.
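
For instance, the extra context can be a couple of sentences in the prompt. Hypothetical wording:

    Our A/B metric is revenue per visitor. ~98% of visitors spend $0, and
    order values are heavy-tailed (a few very large orders dominate).
    Recommend a significance test that is robust to those outliers, and
    explain the tradeoffs versus a plain t-test.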

pinkmuffinere 5 hours ago | parent [-]

I did tell it that I expect to see something like a power-law distribution in order value, so I think I pretty much followed your instructions here. BTW, if you do know the right thing to do in my scenario, I'd love to figure it out. This is not my area of expertise; I've just been figuring it out through articles so far.

Karrot_Kream 4 hours ago | parent [-]

I recommend reading Wikipedia and talking to LLMs to get this one. Order values do follow power-law distributions (you're probably looking for an exponential or a Zipf distribution). You want to ask how to perform a statistical test using these distributions. I'm a fan of Bayesian techniques here, but it's up to you if you want to use a frequentist approach instead. If you can follow some basic calculus, you can follow the math for constructing these statistical tests; if not, some searching will help you find the formulas you need.
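
(To make that concrete, here is a minimal robust-test sketch in Python with NumPy/SciPy. The traffic split, conversion rates, and Pareto order-value distribution are invented for illustration; you would substitute your real per-visitor revenue samples.)

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)

    # Hypothetical revenue-per-visitor data: most visitors spend nothing,
    # and purchasers follow a heavy-tailed (Pareto) order-value distribution.
    def simulate(n, conv_rate, scale):
        converted = rng.random(n) < conv_rate
        return np.where(converted, rng.pareto(2.5, n) * scale, 0.0)

    control = simulate(10_000, 0.020, 40.0)
    variant = simulate(10_000, 0.022, 40.0)

    # Rank-based test: robust to the outliers that wreck a t-test on raw revenue.
    _, p = mannwhitneyu(control, variant, alternative="two-sided")
    print(f"Mann-Whitney U p-value: {p:.3f}")

    # Bootstrap a 95% CI for the difference in mean revenue per visitor.
    diffs = [rng.choice(variant, variant.size).mean()
             - rng.choice(control, control.size).mean()
             for _ in range(2000)]
    print("95% bootstrap CI:", np.percentile(diffs, [2.5, 97.5]))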

pinkmuffinere 4 hours ago | parent [-]

Thanks for the suggestions! I didn't want to do the math myself, but I did take your suggestion and found some articles discussing ways to make it work even with a non-normal distribution:

- https://cxl.com/blog/outliers/

- https://www.blastx.com/insights/the-best-revenue-significanc...

- (online tool to calculate significance) https://www.blastx.com/rpv-calculator

I'm not checking their math, but the articles make sense to me, and I trust they implemented it correctly. In the end, the LLM did get me to the correct answer by suggesting the articles, so I guess I should eat some humble pie and say it _did_ help me. At the same time, if I hadn't had the intuition that using RPV as-is in a t-test would be noisy, plus the suggestions from this comment thread, I think I could have gone down the wrong path. So I'm not sure what my conclusion is; maybe something like: LLMs are helpful once you ask the right question.

Karrot_Kream 3 hours ago | parent [-]

One heuristic I like to use when thinking about this question (and I honestly wish the answer space here were less emotionally charged, so we could all learn from each other) is this: LLMs need a human who understands the shape of the solution to check the LLM's work. In fields where I have confirmed expertise, I can easily nudge and steer the LLM and only quickly skim its output to know whether it's right or wrong. In fields where I don't, I first ask the LLM for resources (papers, textbooks, articles, etc.) and familiarize myself with some initial literature. I then work with the LLM slowly to produce a solution. I've found that to work well so far.

(I also just love statistics and think it's some of the most applicable math to everyday life in everything from bus arrival times to road traffic to order values to financial markets.)

sivanmz 4 hours ago | parent | prev | next [-]

It's been my experience recently as well. I point it at an issue tracker and ask it to investigate, write a test to reproduce the problem, and plan a fix together. There's lots of hand-holding from me, but it saves me a lot of work, and I've been surprised by its comfort with legacy codebases. For now I feel empowered, and I'm actually working more intensively, but I was wondering to myself whether I'm going to run out of work this year. Interestingly, our metrics show that output is slowed by the increased workload on reviewers.

boxedemp 6 hours ago | parent | prev | next [-]

I'm with you. The project I'm working on is moving at phenomenal velocity. I'm basically spending my time writing specs and performing code reviews. As long as my code review comments and design docs are clear I get a secure, scalable, and resilient system.

Tests were always important, but now they are the gatekeepers to velocity.

dataflow 6 hours ago | parent | prev | next [-]

I feel like this might be heavily dependent on both your task and the AI you're using? What language do you code in and what AI do you use? And are your tasks pretty typical/boilerplate-y with prior art to go off of, or novel/at-the-edge-of-tech?

RandomLensman 5 hours ago | parent | prev | next [-]

Outside of coding/non-physical areas, the impact can be quite muted. I haven't seen much impact on surgical procedures, for example (but maybe others have?).

lm28469 2 hours ago | parent | prev | next [-]

Meanwhile, Gemini tells me my Go code doesn't compile (it does).

It gaslights me by telling me I must be a time traveler, because I use Go 1.26 but the latest version is actually 1.24.

And it tells me I can't use wg.Go() because that function does not exist (it does).

fulafel 6 hours ago | parent | prev | next [-]

A terminology tangent, since it's an econ publication: notice that the article doesn't talk about productivity.

Productivity is a term of art in economics: it means you generate more units of output (for example, per person, per input, or per wage paid), but it doesn't take quality or desirability into account. It's best suited to commodities and industrial outputs (and maybe slop?).

drekipus 2 hours ago | parent | prev | next [-]

> my job is easier now, I do less.

> I am terrified about what's coming.

God I hope I never ever have to work with you

KronisLV 5 hours ago | parent | prev | next [-]

I'm currently working across five projects (it was four last week, but you know how it is). I now do more in days than others might in a week.

Yesterday a colleague didn't quite manage to implement a loading container with a Vue directive instead of DOM hacks. It was easier for me to just throw AI at the problem and produce a working, tested solution with developer docs than to have a similarly long meeting and have them iterate for hours.

Then I got back to training a CNN to recognize crops from space (ploughing and mowing will need to be estimated alongside inference, since there are no markers in the training data, but one can look at BSI changes, for example), deployed a new version of an Ollama/OpenAI/Anthropic proxy that can work with AWS Bedrock and updated the docs-site instructions, deployed a new app that will have a standup bot and on-demand AI code review (LiteLLM and Django), and am working on codegen to migrate some Oracle Forms that have otherwise been stagnating.

It's not funny how overworked I am, and sure, I still have to babysit parallel Claude Code sessions and sometimes test things manually and write out changes, but this is completely different work compared to two or three years ago.

Maybe the problem spaces I'm dealing with are nothing novel, but I assume most devs' work is like that, and I'd be surprised at people's productivity not increasing.

When people nag in meetings about needing to change something in a codebase, or not knowing how to implement something and its value add, I’ll often have something working shortly after the meeting is over (due to starting during it).

Instead of sending "add Vitest" to the backlog graveyard, I had it integrated and running in one or two evenings with about 1,200 tests (and fixed some bugs along the way). Instead of talking about hypothetical Oxlint and Oxfmt performance improvements, I had both benchmarked against ESLint and Prettier within the hour.

Same for making server config changes with Ansible that I previously didn't make due to the additional friction; it is mostly just gone (as long as I plan in some free time in case things get fucked up and I need to fix them).

Edit: oh, and in my free time I built a Whisper + VLM + LLM pipeline based on OpenVINO, so that I can feed it hours-long stream VODs and get an EDL cut to a desired length that I can then import into DaVinci Resolve and work on the video editing after the first basic prepass is done (also PySceneDetect and some audio alignment to prevent bad cuts). And then I integrated it with subscription Claude Code, not just LiteLLM and cloud providers with per-token costs, for the actual cut-making part (scene descriptions and audio transcriptions stay local, since those don't need a complex LLM, but it can use the cloud for cuts).

Oh, and I'm moving from my Contabo VPSes to a Hetzner Server Auction box that now runs Proxmox with VMs inside it, except this time around I'm managing it with Ansible instead of manual scripts. I'm also migrating from Docker Swarm to plain Docker Compose + Tailscale networks (maybe Headscale later), and using more upstream containers where needed instead of trying to build all of mine myself, since storage isn't a problem and consistency isn't that important. At the same time, I migrated from Drone CI to Woodpecker CI and from Nexus to Gitea Packages, since I'm already using Gitea and Nexus is a maintenance burden.

If this becomes the new “normal” in regards to everyone’s productivity though, there will be an insane amount of burnout and devaluation of work.

Karrot_Kream 4 hours ago | parent | next [-]

> When people nag in meetings about needing to change something in a codebase, or not knowing how to implement something and its value add, I’ll often have something working shortly after the meeting is over (due to starting during it).

We've started building harnesses that allow people who don't understand code to create PRs to implement their little nags. We rely on an engineer to review, merge, and steward the change, but it means that non-eng folks don't rely on us as a gate. (We're a startup and can't really afford "teams" to do this hand-holding and triage for us.)

As you say, we're all a bit overworked and burned out. I've been context switching so much that on days when I'm very productive I've started getting headaches. I'm achieving a lot more than before, but holding the various threads in my head and context switching is just a lot.

leptons 4 hours ago | parent | prev [-]

>I now do more in days than others might in a week.

I've always done more in days than others might in a week. YMMV.

therealdrag0 6 hours ago | parent | prev | next [-]

I can only explain it by people not having used agentic tools, or only having tried them nine months ago for a day before giving up, or having such strict coding-style preferences that they burn time adjusting generated code to their liking and blame the AI, even though the adjustments are non-functional changes they didn't bother to encode into rules.

The productivity gains are blatantly obvious at this point, even in large distributed codebases, and from junior to senior engineer.

MattGaiser 5 hours ago | parent [-]

I can see someone who is very particular about their way being the right way having issues with it. I’m very much the kind of person who believes that if I can’t write a failing test, I don’t have a very serious case. A lot of devs aren’t like that.

truetraveller 6 hours ago | parent | prev [-]

You were probably deficient in RESEARCH skills before. No offense; I was also like this once. LLMs do the research and put the results on a plate. Yes, for people who were deficient in research skills, I can see 2-3x improvements.

Note 1: I have "expert"-level research skills. LLMs still help me in research, but the boost is probably 1.2x max.

Note 2: By research, I mean googling, GitHub search, forum search, etc., and quickly testing using jsfiddle/codepen, etc.

throwaw12 6 hours ago | parent | next [-]

No worries, I do not get offended easily.

But I also think you are overestimating your RESEARCH skills. Even if you are very good at research, I am sure you can't read 25 files in parallel, summarize them (even if the summary misses some details) in 1 minute, and then come up with a somewhat working solution in the next 2 minutes.

I am pretty sure humans can't comprehend 25 code files, each with at least 400 lines of non-boilerplate code, in 2 minutes. An LLM can do it, and it is very, very good at summarizing.

I can even steer its summarizing by prompting where to focus when it is reading files (because now I can iterate 2-3 times on each RESEARCH task and improve my next attempt based on the shortcomings of the previous one).

truetraveller 40 minutes ago | parent [-]

OK, it's not just RESEARCH, but the "RESEARCHability" of the source content (in this case, code), and also critical-analysis ability (not saying you are deficient in anything; speaking in general terms).

In this example, if the 25 files are organized nicely and I had a nice IDE that listed the class/namespace members of each file neatly, I might take 30 minutes to understand the overall structure.

Moreover, if I analyzed this critically, I would ask: how often does this event of summarizing 25 files happen? Are we changing codebases every day? No, it's a one-time cost. And going through the files manually provides insight not returned by the LLM.

Obviously, every case is different, and perhaps you do need to RESEARCH new codebases often; I dunno!

siva7 3 hours ago | parent | prev | next [-]

Ok, Mr. Expert-Level Researcher, go back and read the parent comment again to find that it has nothing to do with a deficiency in research skills.

truetraveller 38 minutes ago | parent [-]

Lol! Didn't mean any harm, just giving my 2cents!

throwaw12 5 hours ago | parent | prev [-]

please don't change your comment constantly (or at least add UPDATE 1/2/3), because you had different words before; you were saying things in this fashion:

* you probably lack good RESEARCH skills

* I can see at most 1.25x improvements - now it is 2-3x

By updating your comment you make my reply irrelevant to your past response

truetraveller an hour ago | parent [-]

Apologies, I changed it within a ~10-minute window. I never thought you would actually see it so fast.

bandrami 11 hours ago | parent | prev | next [-]

I don't write code for a living but I administer and maintain it.

Every time I say this, people get really angry, but: so far AI has had almost no impact on my job. Neither my dev team nor my vendors are getting me software faster than they were two years ago. Docker had a bigger impact on my pipeline than AI has.

Maybe this will change, but until it does I'm mostly watching bemusedly.

kdheiwns 8 hours ago | parent | next [-]

Yep. All AI has done for me is give me the power of how good search engines were 10+ years ago, when I could search for something and quickly find actually relevant and helpful info.

I've seen lots of people say AI can basically code a project for them. Maybe it can, but that seems to depend heavily on the field. Other than boilerplate code or very generic projects, it's a step above useless, IMO, when it comes to gamedev. It's about as useful as a guy who read some documentation for an engine a couple of years ago and kind of remembers it, but not quite, and makes lots of mistakes. The best it can do is point me in the general direction I need to go, but it'll hallucinate basic functions and mess up any sort of logic.

kranner 7 hours ago | parent | next [-]

My experience is the same. There are modest gains in compensating for a lack of good documentation and the like, but the human bottlenecks in the process aren't useless bureaucracy. Whether or not a feature, or a particular UX implementation of it, makes sense: these things can't be skipped, sped up, or handed off to any AI.

freddref 5 hours ago | parent [-]

What are these bottlenecks, specifically, that you feel are essential?

I am trying to compare this to reports that people are not reviewing code any more.

kranner 2 hours ago | parent [-]

The feedback and discussions around features and their exact UI implementations as they are being developed.

bee_rider 8 hours ago | parent | prev | next [-]

Come to think of it, I haven't seen as many "copy paste from Stack Overflow" memes lately. Maybe LLMs have given people the ability to:

1) Do that inside their IDEs, which is less funny

2) Generate blog posts about it instead of memes

throwaw12 6 hours ago | parent | prev [-]

> how good search engines were 10+ years ago

For me this is a huge boost in productivity. Here is how I remember working in the past (example: a Google API integration):

Before:

    * go through the docs to understand how to start (quick start) and things to know
    * start the boilerplate (e.g. install the scripts/libs)
    * figure out which configs to enable in the GCP console
    * integrate the basic API and test
    * of course it fails, because it's a Google API, so it's difficult to work with
    * along the way, figure out why the Python lib is failing to install: oh, version mismatch; ohh, gcc not installed; ohh, libffmpeg is required...
    * somehow copy, paste, and integrate the first basic API call
    * prepare for production; ohhh, production requires a different type of auth flow
    * deploy, redeploy, fix, deploy, redeploy
    * 3 days later -> finally hello world is working
Now:

    * Hey my LLM buddy, I want to integrate the Google API, where do I start? Come up with a plan
    * Enable the things which require manual intervention
    * In the meantime the LLM integrates the code, installs libs, asks me to approve installation of libpg, libffmpeg, ...
    * test; if it fails, feed the error back to the LLM + a prompt to fix it
    * deploy
noosphr 6 hours ago | parent [-]

This is what you'd have used a search engine for 10 years ago.

The docs used to be good enough that there would be an example which did exactly what you needed, more often than the LLM gets it right today.

httpz 8 hours ago | parent | prev | next [-]

This is a classic case of the productivity paradox seen when personal computers were first introduced into workplaces in the 80s.

A famous economist, Robert Solow, once said, "You can see the computer age everywhere but in the productivity statistics."

There are many reasons for the lag in productivity gains, but they will certainly come.

https://en.wikipedia.org/wiki/Productivity_paradox

bandrami 7 hours ago | parent | next [-]

That's only certain if investments in tech infrastructure always lead to productivity increases. But sometimes they just don't. Lots of firms spent a lot of money on blockchain five years ago, for instance, and that money is just gone now.

20k 7 hours ago | parent | next [-]

I find the universal assumption that AI is going to be good for productivity odd.

The loss of skills, the complete loss of visibility into and experience with the codebase, and the complete lack of software architecture design seem like massive killers in the long term.

I have a feeling that we're going to see productivity with AI drop through the floor

hombre_fatal 6 hours ago | parent | next [-]

I'd claim the opposite. Better models design better software, and are quickly getting to better software than what most software developers were writing.

Just yesterday I asked Opus 4.6 what I could do to make an old macOS AppKit project more testable, too lazy to even encumber the question with my own preferences like I usually do, and it pitched a refactor into Elm architecture. And then it did the refactor while I took a piss.

The idea that AI writes bad software or can't improve existing software in substantial ways is really outdated. Just consider how most human-written software is untested despite everyone agreeing testing is a good idea, simply because test-friendly architecture takes a lot of thought and test maintenance slows you down. AI will do all of that; just mention something about 'testability' in AGENTS.md.
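For instance, the kind of AGENTS.md nudge being described might look like this (a hypothetical excerpt, my wording, not anything from the comment):

    ## Testability
    - Structure new code for dependency injection; avoid singletons and hidden globals.
    - Every behavior change ships with unit tests; run the full suite before finishing.
    - If existing code resists testing, propose a refactor rather than skipping tests.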

bandrami 6 hours ago | parent | next [-]

OK so this comes back to the question I started this subthread with: where is this better software? Why isn't someone selling it to me? I've been told for a year it's coming any day now (though invariably the next month I'm told last month's tools were in fact crap and useless compared to the new generation so I just have to wait for this round to kick in) and at some point I do have to actually see it if you expect me to believe it's real.

hombre_fatal 6 hours ago | parent | next [-]

How would you know if all software written in the last six months shipped X% faster and was Y% better?

Why would you think you have your finger on the pulse of general software trends like that when you use the same, what, dozen apps every week?

Just looking at my own productivity, as mere side projects this month, I've shipped my own terminal app (replacing iTerm2), a btrfs+luks NAS system manager, an overhaul of my macOS gamepad mapper for the app store, and more. All fully tested and really polished, yet I didn't write any code by hand. I would have done none of that this month without AI.

You'd need some real empirics to pick up productivity stories like mine across the software world, not vibes.

bandrami 6 hours ago | parent | next [-]

Right, I'm sympathetic to the idea that LLMs facilitate the creation of software that people previously weren't willing to pay for, but then kind of by definition that's not going to have a big topline economic impact.

Tanjreeve 6 hours ago | parent | prev [-]

It's on the people pushing AI as the panacea that has changed things to show their workings, not on someone saying "I've not seen evidence of it." Otherwise it's "vibes", as you put it.

eucyclos 5 hours ago | parent | prev [-]

Here's an example: https://eudaimonia-project.netlify.app/

I'm happy to sell it to you, though it is also free. I guided Claude to write this in three weeks, after never having written a line of JavaScript or set up a server before. I'm sure a better JavaScript programmer than I could do this in three weeks, but there's no way I could. I just had a cool idea for making advertising a force for good, and now I have a working version in beta.

I'd say it is better software, but "better" is doing a lot of heavy lifting there. Claude's execution is average and always will be; that's a function of being a prediction engine. But I genuinely think the idea is better than how advertising works today, and this product would not exist at all if I had to write it myself. And I'm someone who has written code before, enough that I was probably a somewhat early adopter to this whole thing. Multiply that by all the people whose ideas get to live now, and I'm sure some ideas will prove to be better even with average execution. Like an LLM, that's a function of statistics.

bandrami 5 hours ago | parent [-]

I'm glad you made something with it you wanted to make, and as a fan of Aristotle I'm always happy to see the word eudaimonia out there. Best of luck. That said, I don't understand what this does or why I would want the tokens it mentions.

eucyclos 5 hours ago | parent [-]

Yeah, I gotta make a video walkthrough. It's basically a goal tracker combined with an ad filter: write what you want out of life and block ads; it replaces them with ads that actually align with your long-term goals instead of distracting from them. The tokens let you add ads to the network, though you also get some for using the goal tracker.

bandrami 4 hours ago | parent [-]

Though this does suggest one possible answer to me: the new software is largely web applications, and the web is just a space I don't spend much time in anymore, other than a few retro sites like this.

20k 3 hours ago | parent | prev [-]

And now you have no idea how any of the code works.

AI writes bad software by virtue of it being written by the AI, not you. No actual team member understands what's going on with the code. You can't interrogate the AI for its decision making. It doesn't understand the architecture it's built. There's nobody you can ask about why anything is built the way it is - it just exists.

It's interesting watching people forget that the #1 most important thing is developers who understand a codebase thoroughly. Institutional knowledge is absolutely key to maintaining a codebase, and to making good decisions in the long term.

It's always been possible to trade long-term productivity for short-term gains like this. But now you simply have no idea what's going on in your code, which is an absolute nightmare for long-term productivity.

mirsadm 22 minutes ago | parent [-]

My own observation is that the initial boost to productivity results in massive crippling technical debt.

nikkwong 7 hours ago | parent | prev [-]

Having the productivity "drop through the floor" is a bit hyperbolic, no? Humans are still reviewing the PRs before code merge at least at my company (for the most part, for now).

bandrami 7 hours ago | parent [-]

I don't know that it's likely but it's certainly a plausible outcome. If tooling keeps getting built for this and the financial music stops it's going to take a while for everybody to get back up to speed

Remember this famously happened before, in the 1970s

Tanjreeve 6 hours ago | parent [-]

There's an actual working product now, albeit one which is currently loss-leading. In the software world at least, there is definitely enough value for it to be used, even if it's just a better search engine. I'm not sure why it would disappear if the financial music stops, as opposed to being commoditised.

bandrami 6 hours ago | parent [-]

Because there's cheaper ways to get an equally good search engine? But yes I imagine some amount of inference will continue even in an AI Winter 3.0 scenario.

salawat 5 hours ago | parent | prev [-]

Ironically, abstraction bloat eats away at any infra gains. We trade more compute to allow people less in tune with the machine to get things done, usually at the cost of the implementation being eh... suboptimal, shall we say.

bandrami 4 hours ago | parent [-]

I think there's a broad category error where people see that every gain has been an abstraction (true) but conclude from that that every abstraction will be a gain (dubious)

danbolt 3 hours ago | parent | prev | next [-]

My unfounded hunch for the computing bit is that computers became more and more commonplace in the home as we approached the 21st century.

A Commodore 64 was a cool gadget, but "the family computer" became a device that commoditized productivity. The opportunity cost of applying a computer to try something new went to near zero.

It might have been harder for someone to improve the productivity of an old factory in Shreveport, Louisiana with a computer than it was for the upstarts at id to make Doom.

kranner 7 hours ago | parent | prev [-]

> There are many reasons for the lag in productivity gain but it certainly will come.

Predictions without a deadline are unfalsifiable.

thewebguyd 11 hours ago | parent | prev | next [-]

Same here, more or less, in the ops world. Yeah, I use AI, but I can't honestly say it's massively improved my productivity or drastically changed my job in any way, other than that the emails I get from the other managers at my work are now clearly written by AI.

I can turn out some scripts a little bit quicker, or find an answer to something a little quicker than googling, but I'm still waiting on others most of the time, the overall company processes haven't improved or gotten more efficient. The same blockers as always still exist.

Like you said, there has been other tech that has changed my job over time more than AI has. The move to the cloud, Docker, Terraform, Ansible, etc. have all had far more of an impact on my job. I see literally zero change in the output of others, both internally and externally.

So either this is a massively overblown bubble, or I'm just missing something.

keeda 8 hours ago | parent | next [-]

> ... but I'm still waiting on others most of the time, the overall company processes haven't improved or gotten more efficient. The same blockers as always still exist.

And that's the key problem, isn't it? I maintain current organizations have the "wrong shape" to fully leverage AI. Imagine that instead of the scope of your current ownership, you own everything your team or your whole department owns. Consider what that would do to the meetings and dependencies and processes and tickets and blockers and other bureaucracy, something I call "Conway Overhead."

Now imagine that playing out across multiple roles, i.e. you also take on product and design. Imagine what that would do to your company org chart.

I added a much more detailed comment here: https://news.ycombinator.com/item?id=47270142

applfanboysbgon 8 hours ago | parent | next [-]

> Imagine instead of

> Now imagine

> Imagine what that would do

Imagine if your grandma had wheels! She'd be a bicycle. Now imagine she had an engine. She could be a motorcycle! Unfortunately for grandma, she lives in reality and is not actually a motorcycle, which would be cool as hell. Our imagination can only take us so far.

To more substantively reply to your longer linked comment: your hypothesis is that people spend as little as 10% of their time coding and the other 90% in meetings, but that if they could code more, they wouldn't need to meet other people because they could do all the work of an entire team themselves[1]. The problem with your hypothesis is that you take for granted that LLMs actually allow people to do the work of an entire team themselves, and that it is merely bureaucracy holding them back. There have been absolutely zero indicators that this is true. No productivity studies of individual developers tackling tasks show a 10x speedup; results tend to be anywhere from +20% to -20%. We aren't seeing amazing software being built by individual developers using LLMs. There is still only one Fabrice Bellard in the world, even though, if your premise could escape the containment zone of imagination, anyone should be able to be a Bellard on their own time with the help of LLMs.

[1] Also, this is basically already true without LLMs. It is the reason startups are able to disrupt corporate behemoths. If you have just a small handful of people who spend the majority of their work time writing code (by hand! No LLMs required!), they can build amazing new products that outcompete products funded by trillion-dollar entities. Your observation of more coding = less meetings required in the first place has an element of truth to it, but not because LLMs are related to it in any particular way.

sgc 8 hours ago | parent | next [-]

     >  Imagine if your grandma had wheels! She'd be a bicycle.
I always took this to be a sharp jab saying the entire village is riding your grandma, giving it a very aggressive undertone. It's pretty funny nonetheless.

Too early to say what AI brings to the efficiency table, I think. In some major things I do, it's a 1000x speed-up. In others it is more a different way of approaching a problem than a speed-up. In yet others, it is a bit of an impediment. It works best when you learn to quickly recognize patterns and whether it will help. I don't know how people who are raised with AI will navigate and leverage it, which is the real long-term question (just as the difference between pre- and post-smartphone generations is a thing).

keeda 7 hours ago | parent | prev | next [-]

> No productivity studies of individual developers tackling tasks show a 10x speedup; results tend to be anywhere from +20% to minus 20%.

The only study showing a -20% came back and said, "we now think it's +9% to +38%, but we can't prove it rigorously because developers don't want to work without AI anymore": https://news.ycombinator.com/item?id=47142078

Even at the time of the original study, most other rigorous studies showed -5% (for legacy projects, obsolete languages) to +30% (more typical greenfield AND brownfield projects) way back in 2024. Today I hear numbers up to 60% from reports like DX.

But this is exactly missing the point. Most of them are still doing things the old way, including the very process of writing code. Which brings me to this point:

> There have been absolutely zero indicators that this is true.

I could tell you my personal experience, or link various comments on HN, or point you to blogs like https://ghuntley.com/real/ (which also talks about the organizational impedance mismatch for AI), but actual code would be a better data point.

So there are some open-source projects worth looking at, but they are typically dismissed because they look so weird to us. Here are two mostly vibe-coded (as in, minimal code review, apparently) projects that people shredded for having weird code but that are already used by tens of thousands of people, up to 11-18K stars now. Look at the commit volume and patterns for O(300K) LoC in a couple of months, mostly from one guy and his agent:

https://github.com/steveyegge/beads/graphs/commit-activity

https://github.com/steveyegge/gastown/graphs/commit-activity

It's like nothing we've seen before: an almost equal number of LoC additions and deletions, in the hundreds of thousands! It's still not clear how this will pan out long term, but the volume of code and apparent utility (based purely on popularity) is undeniable.

applfanboysbgon 6 hours ago | parent [-]

> they are typically dismissed because they look so weird to us.

I dismiss them because Yegge's work (if it can even be called his work, given that he doesn't look at the code) is steaming garbage with zero real-world utility, not "because they look weird". You suggest the apparent utility is undeniable while saying "based purely on popularity" -- but popularity is in no way a measure of utility. Yegge is a conman who profited hundreds of thousands of dollars shilling a memecoin rugpull tied to these projects. The actual thousands of users are people joining the hypetrain, looking to get in on the promised pyramid scheme of free money where AI will build the next million-dollar software for you, if only you have the right combination of .md files to make it work. None of this software is actually materialising, so all the people in this bubble can do is make more AI wrappers that promise to make other AI wrappers that will totally make them money.

I am completely open to being proven wrong by a vibe-coded open source application that is actually useful, but I haven't seen a single one. Literally not even one. I would count literally anything where the end-product is not an AI wrapper itself, which has tens to hundreds of thousands of users, and which was written entirely by agents. One example of that would be great. Just one. There have been a couple of attempts at a web browser, and Claude's C compiler, but neither are actually useful or have any real users; they are just proofs of concept and I have seen nothing that convinces me they are a solid foundation from which you could actually build useful software from, or that models will ever be on a trajectory to make them actually useful.

pishpash 8 hours ago | parent | prev [-]

This isn't the counter you think it is. It's too much to expect existing behemoths to reshape their orgs substantially on a quick enough timeline. The gains will first be seen in new companies and new organizations, and they will be able to stay flat longer and outcompete the behemoths.

sdf2df 8 hours ago | parent | prev [-]

What a load of fluff lmao. Are you Nadella?

keeda 6 hours ago | parent [-]

Hah! I would say I'm flattered, but I find his style of speaking rather stilted.

linsomniac 7 hours ago | parent | prev | next [-]

You're missing something.

I've been in ops for 30 years, and Claude Code has changed how I work. Ops-related scripting seems to be a real sweet spot for the LLMs, especially as they tend to be smaller tools working together. It can convert a few sentences into working code in 15-30 minutes while you do something else. I've given it access to my Apache logs Elastic cluster, and it does a great job of analyzing them ("We suspect this user has been compromised, can you find evidence of that?"). It's quite startling, actually, what it's able to do.

thewebguyd 6 hours ago | parent [-]

Yeah, it's useful for scripting, but it's still only marginally faster. It certainly hasn't been the "groundbreaking productivity" it's being sold as.

The problem with analyzing logs is determinism. If I ask Claude to look for evidence of compromise, I can't trust the output without also going and verifying myself. It's now an extra step, for what? I still have to go into Elastic and run the actual queries to verify what Claude said. A saved Kibana search is faster, and more importantly, deterministic. I'm not going to leave something like finding evidence of compromise up to an LLM that can, and does, hallucinate especially when you fill the context up with a ton of logs.
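For comparison, the saved search is just a fixed query you can re-run forever. A minimal sketch of the deterministic version, using the Python Elasticsearch client (the cluster address, index pattern, and ECS-style field values are hypothetical):

    from elasticsearch import Elasticsearch
    es = Elasticsearch("http://localhost:9200")  # hypothetical cluster address
    # fixed, repeatable query: failed logins for one user over the last 7 days
    resp = es.search(
        index="apache-logs-*",  # hypothetical index pattern
        query={"bool": {"filter": [
            {"term": {"user.name": "suspect-user"}},
            {"term": {"event.outcome": "failure"}},
            {"range": {"@timestamp": {"gte": "now-7d"}}},
        ]}},
        size=100,
    )
    for hit in resp["hits"]["hits"]:
        print(hit["_source"])

Same query in, same results out, every time. That's the property an LLM summary of the logs can't give you.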

An auditor isn't going to buy "But Claude said everything was fine."

Is AI actually finding things your SIEM rules were missing? Because otherwise, I just don't see the value in having a natural language interface for queries I already know how to run; it's less intuitive for me and non-deterministic.

It's certainly a useful tool, there's no arguing that. I wouldn't want to go back to working with out it. But, I don't buy that it's already this huge labor market transformation force that's magically 100x everyone's productivity. That part is 100% pure hype, not reality.

bandrami 6 hours ago | parent | next [-]

The tolerance for indeterminacy is I think a generational marker; people ~20 years younger than me just kind of think of all software as indeterminate to begin with (because it's always been ridiculously complicated and event-driven for them), and it makes talking about this difficult.

sebmellen 6 hours ago | parent | next [-]

I shudder to think of how many layers of dependency we will one day sit upon. But when you think about it, aren’t biological systems kind of like this too? Fallible, indeterminable, massive, labyrinthine, and capable of immensely complex and awe inspiring things at the same time…

kiba 5 hours ago | parent | prev [-]

People younger than me are not even adults. I grew up during the dial up era and then the transition to broadband. I don't think software is indeterminate.

linsomniac 6 hours ago | parent | prev [-]

>still only marginally faster.

Is it? A couple days ago I had it build tooling for a one-off task I need to run; it wrote ~800 lines of Python to accomplish this, in <30m. I found it was too slow, so I got it to convert it to run multiple tasks in parallel in another prompt. It would have taken a couple of days for me to build by hand, given the number of interruptions I have in the average day. This isn't a one-off; it's happening all the time.
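For what it's worth, the parallel conversion in cases like this is usually just the standard executor pattern. A minimal sketch, with the per-item work as a hypothetical stand-in:

    from concurrent.futures import ThreadPoolExecutor, as_completed
    def process(item):
        # hypothetical stand-in for the one-off task's per-item work
        return item * 2
    def process_all(items, workers=8):
        # fan the per-item work out across a thread pool instead of a serial loop
        results = {}
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = {pool.submit(process, item): item for item in items}
            for future in as_completed(futures):
                results[futures[future]] = future.result()
        return results
    print(process_all(range(10)))

Trivial to write, but also exactly the kind of mechanical change that's easy to delegate.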

tayo42 6 hours ago | parent | prev | next [-]

Ops hasn't been in the crosshairs of AI yet.

Imo it's only a matter of time as companies start to figure out how to use AI. Companies don't seem to have real plans yet, and everyone is still figuring out AI in general.

Soon though, I think agents will start popping up for things like first-line response to pages and executing automation.

bandrami 5 hours ago | parent [-]

We've had deterministic automation of tier one response for over a decade now. What value would indeterminacy add to that?

tayo42 5 hours ago | parent [-]

To deal with the problems where there is ambiguity in the problem and the approach to solving it. Not everything is a basic decision tree. Humans aren't deterministic either; the way we would approach a problem is probably different. Is one of us right or wrong? We're generally just focused on end results.

Maybe 2 years ago AI was doing random stuff and we got all those funny screenshots of dumb Gemini answers. The indeterminism leading to random stuff isn't really an issue any more.

The way it thinks keeps it on track.

bandrami 4 hours ago | parent [-]

Two weeks ago I asked a frontier model to list five mammals without "e" in their name and number four was "otter"

sdf2df 11 hours ago | parent | prev [-]

You're not missing anything.

Humans are funny. But most can't seem to understand that the tool is a mirage and they are putting false expectations on it. E.g. management of firms cutting back on hiring under the expectation that LLMs will do magic - with many cheering "this is the worst it'll be bro!!".

I just hope more people realise before Anthropic and OAI can IPO. I would wager they are in the process of cleaning up their financials for it.

fnordpiglet 7 hours ago | parent | prev | next [-]

My employer is pretty advanced in its use of these tools for development, and it's absolutely accelerated everything we do, to the point that we are exhausting six-month roadmaps in a few weeks. However, I think very few companies are operating like this yet. It takes time for tools and techniques to make it out, and Claude Code alone isn't enough. They are basically planning to let go of most of the product managers and eng managers, and I expect they're measuring who is using the AI tools most effectively; everyone else will be let go, likely before year's end. Unlike prior iterations I saw at Salesforce, this time I am convinced they're actually going to do it and pull it off. This is the biggest change I've seen in my 35-year career, and I have to say I'm pretty excited to be going through it, even though the collateral damage to people's lives will be immense. I plan to retire after this as well; I think this part is sort of interesting, but I can see clearly that what comes next is not.

p1esk 6 hours ago | parent | next [-]

I’m observing very similar trends at a startup I’m at. Unfortunately I’m not ready to retire yet.

blackcatsec 6 hours ago | parent | prev [-]

Why are you excited for this? They're not going to give YOU those people's salaries. You will get none of it. In fact, it will drag your salary through the floor because of all the available talent.

Karrot_Kream 2 hours ago | parent [-]

I'm very confused about this. Salary is only one portion of your total compensation. The vast majority of tech companies offer equity in the company. The two ways to increase the FMV of your equity are: increase your equity stake, or increase the value of the total equity available. Hitting the same goals with fewer people means your run rate is lower, which increases the value of your equity (the FMV prices in lower COGS for the same revenue). Also, keeping staff on often means you want to offer them increased equity stakes as part of an employment package. Letting staff go means more of that available equity pool can be distributed to remaining employees.

We aren't fungible workers in a low skill industry. And if you find yourself working in a tech company without equity: just don't, leave. Either find a new tech company or do something else altogether.

bandrami 11 hours ago | parent | prev | next [-]

The dev team is committing more than they used to. A lot, in fact, judging from the logs. But it's not showing up as a faster cadence of getting me software to administer. Again, maybe that will change.

whateveracct 8 hours ago | parent | next [-]

I think they feel more productive but aren't actually.

righthand 11 hours ago | parent | prev [-]

In my experience it is now twice the number of merge requests, since a follow-up appears to correct the bugs no one reviewed in the first merge request.

silentkat 9 hours ago | parent [-]

I’m at a big tech company. They proudly stated more productivity, measured in commits (already nonsense): 47% more commits, 17% less time per commit. That means roughly 22% more total time spent coding (1.47 × 0.83 ≈ 1.22). Burning us out and acting like the AI slop is “unlocking” productivity.

There’s some neat stuff, don’t get me wrong. But every additional tool so far has started strong but then always falls over. Always.

Right now there’s this “orchestrator” nonsense. Cool in principle, but as someone who made automation scripts all the time before, it’s not impressive. I spent $200 to automate some bug finding and fixing. It found and fixed the easy stuff (still pretty neat), and then “partially verified” it fixed the other stuff.

The “partial verification” was it justifying why it was okay it was broken.

The company has mandated we use this technology. I have an “AI Native” rating. We’re being told to put out at least 28 commits a month. It’s nonsense.

They’re letting me play with an expensive, super-high-level, probabilistic language. So I’m having a lot of fun. But I’m not going to lie, I’m very disappointed. Got this job a year ago. 12 years programming experience. First big tech job. Was hoping to learn a lot. Know my use of data to prioritize work could be better. Was sold on their use of data. I’m sure some teams here use data really well, but I’m just not impressed.

And I’m not even getting into the people gaming the metrics to look good while actually making more work for everyone else.

sdf2df 8 hours ago | parent | next [-]

Lol, it's gonna take longer than it should for this to play out.

Sunk cost fallacy is very real, for all involved. Especially the model producers and their investors.

Sunk cost fallacy is also real for devs who are now giving up how they used to work - they've made a sunk investment in learning to use LLMs etc. Hence the 'there's no going back' comments that crop up on here.

As I said in this thread, anyone who can think straight (I'm referring to those who adhere to fundamental economic principles) can see what's going on from a mile away.

booleandilemma 6 hours ago | parent | prev [-]

Management is just stupid sometimes. We had a similar metric at my last company and my manager's response was "well how else are we supposed to measure productivity?", and that was supposed to be a legitimate answer.

eucyclos 7 hours ago | parent | prev | next [-]

A tool with a mediocre level of skill in everything looks mediocre when the backdrop is our own area of expertise and game changing when the backdrop is an unfamiliar one. But I suspect the real game changer will be that everyone is suddenly a polymath.

sibeliuss 6 hours ago | parent [-]

This ^ Exactly it. This will be the change.

lovich 9 hours ago | parent | prev | next [-]

> so far AI has had almost no impact on my job.

Are you hiring?

LPisGood 7 hours ago | parent | next [-]

My company has been hiring a ton over the last year or so. Jobs are out there

cute_boi 8 hours ago | parent | prev [-]

My friend used to say that, and he got quietly fired and outsourced, because now someone in India can use ChatGPT to produce similar code, lol.

IMO AI will make 70-80% of jobs obsolete for sure.

bandrami 7 hours ago | parent | next [-]

But, as I said above, I don't produce code; I administer it (administrate? whichever it is).

leptons 3 hours ago | parent | prev [-]

>now someone in India can use ChatGPT to produce similar code,

lol, that sounds like a disaster for the codebase.

willmadden 11 hours ago | parent | prev | next [-]

Build a new feature. If you aren't bogged down in bureaucracy it will happen much faster.

YesBox 7 hours ago | parent | next [-]

I don't use LLMs much. When I do, the experience always feels like search 2.0: information at your fingertips, but you need to know exactly what you're looking for to get exactly what you need. The more complicated the problem, the more fractal / divergent the outcomes are. (I'm forming the opinion that this is going to be the real limitation of LLMs.)

I recently used copilot.com to help solve a tricky problem for me (which uses GPT 5.1):

   I have an arbitrary width rectangle that needs to be broken into smaller 
   random width rectangles (maintaining depth) within a given min/max range. 
The first solution merged the remainder (if less than min) into the last rectangle created (regardless of whether it exceeded the max).

So I poked the machine.

The next result used dynamic programming and generated every possible output combination. With a sufficiently large (yet small) rectangle, this is a factorial explosion and stalled the software.

So I poked the machine.

I realized this problem was essentially finding the distinct multisets of numbers that sum to some value. The next result used dynamic programming and only calculated the distinct sets (order is ignored). That way I could choose a random width from the set and then remove that value. (The LLM did not suggest this). However, even this was slow with a large enough rectangle.

So I poked my brain.

I realized I could start off with a greedy solution: choose a random width within range, subtract from the remaining width. Once the remaining width is small enough, use dynamic programming. Then I had to handle the edge cases (no sets, when it's okay to break the rules, etc.)
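A minimal sketch of that greedy-plus-DP hybrid, as I understand the description (my reconstruction, not the code from the session; the final fallback covers the "okay to break the rules" case by stretching the last piece past max):

    import random
    from functools import lru_cache
    def split_width(total, lo, hi):
        @lru_cache(maxsize=None)
        def feasible(n):
            # can n be written as a sum of parts, each within [lo, hi]?
            return n == 0 or any(feasible(n - p) for p in range(lo, min(hi, n) + 1))
        widths, remaining = [], total
        while remaining > 3 * hi:  # greedy phase: plenty of room left
            w = random.randint(lo, hi)
            widths.append(w)
            remaining -= w
        while remaining > 0:  # DP phase: only pick widths leaving a feasible remainder
            choices = [p for p in range(lo, min(hi, remaining) + 1)
                       if feasible(remaining - p)]
            if not choices:  # no exact fit: break the max rule on the last piece
                if widths:
                    widths[-1] += remaining
                else:
                    widths.append(remaining)
                break
            w = random.choice(choices)
            widths.append(w)
            remaining -= w
        return widths
    print(split_width(1000, 40, 120))  # random widths in [40, 120] summing to 1000

The feasibility cache only ever covers the small DP tail, so the factorial blow-up of enumerating every combination never happens.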

So the LLMs are useful, but this took 2-3 hours IIRC (thinking, implementation, testing in an environment). Pretty sure I would have landed on a solution within the same time frame. Probably greedy with backtracking to force-fit the output.

bandrami 11 hours ago | parent | prev | next [-]

Most of these are new features, but then they have to integrate with the existing software so it's not really greenfield. (Not to mention that our clients aren't getting any faster at approving new features, either.)

willmadden 10 hours ago | parent [-]

Did you train a self-hosted/open source LLM on your existing software and documentation? That should make it far more useful. It's not Claude Code, but some of those models are 80% there. In 6 months they'll be today's Claude Code.

bandrami 9 hours ago | parent [-]

What would that help us with?

sdf2df 10 hours ago | parent | prev [-]

It's this kind of thinking that tells me people can't be trusted with their comments on here re. "Omg I can produce code faster and it'll do this and that".

No, simply 'producing a feature' ain't it, bud. That's one piece of the puzzle.

Kye 11 hours ago | parent | prev | next [-]

I've taken to calling LLMs processors. A "Hello World" in assembly is about 20 lines and on par with most unskilled prompting. It took a while to get from there to Rust, or Firefox, or 1T parameter transformers running on powerful vector processors. We're a notch past Hello World with this processor.

The specific way it applies to your specific situation, if it exists, either hasn't been found or hasn't made its way to you. It really is early days.

sdf2df 11 hours ago | parent | prev [-]

I will personally say right now... it's not gonna change lol.

People who actually know how to think can see it a mile away.

stevenhuang 8 hours ago | parent [-]

It's telling that you feel the need to create a throwaway to voice this opinion.

sdf2df 8 hours ago | parent [-]

1) Not a throwaway; I can't remember what my old account is called. 2) Feel free to screenshot this. Stick it on your desktop, set a reminder, and check the state of the world in 12 months' time.

Job done, fella.

jaxn 7 hours ago | parent | next [-]

For some of us, the world has already changed drastically. I am shipping more code, better code, less buggy code WAY faster than ever before. Big systemic changes for the better to our infra as well. There are days where I easily do 2 weeks worth of my best work ever.

I totally understand that not everyone is having that experience. And yet until people live it, it seems they just discount the experience others are having.

I'll take the 12 month bet.

leptons 3 hours ago | parent | next [-]

>I am shipping more code, better code, less buggy code WAY faster than ever before.

It's clearly relative. For all we know you're a crap coder and AI is now your crutch. We have no evidence that with AI you are as good as an average developer with a fair amount of experience. And even if you do have a fair amount of experience, that doesn't mean you're a good coder.

salawat 5 hours ago | parent | prev [-]

Cool, and you're doing it on top of the single largest IP hijacking in the history of the world, a massive uptick in infra spend and energy burn to "just throw more compute" at it instead of figuring out how to throw the right compute at it, cannibalization of the onboarding of graduates, and the loss of enough friction to keep you from running off after what's probably a bad idea on further analysis, because you can crank it out in a weekend. Last time somebody did that, we got fucking JS. We still haven't rid ourselves of it.

Let us not lose sight of how we got here.

stevenhuang 8 hours ago | parent | prev [-]

In 12 months, I won't be surprised if there's not much change. But in 5 years? 10? Anything can happen. It is presumptuous to think you can project the future capabilities of this technology and confidently state that labour markets will never be affected.

sdf2df 7 hours ago | parent [-]

You prove my point.

Guys like you don't get it. You think OAI, Amazon, etc. can freely put large amounts of money into this for 5-10 years? Lmao - delusional. Investors are impatient. Show huge jumps in revenue this year, or you no longer have permission to put monumental amounts of money into this.

Short of that, they'll just destroy the stock price by selling off, leaving employees who get paid via SBC very unhappy.

dolebirchwood 7 hours ago | parent | next [-]

> You think OAI, Amazon etc can freely put large amounts of money into this for 5-10 years?

Won't matter. The Chinese models will be running on potatoes by then and be better than ever.

HWR_14 5 hours ago | parent | prev | next [-]

Whatever you want to say about other companies, Amazon (and Meta) is quite willing to spend many years pouring billions into technology they think will pay off later.

Ekaros 3 hours ago | parent [-]

Looking at VR and Meta. They absolutely can be wrong. So even after investing what seems to be enough, there might not be any payoff.

greyw 7 hours ago | parent | prev [-]

Such a reductive and superficial way of thinking about how investment works. It makes me confident you aren't really able to make a good prediction.

tl2do 12 hours ago | parent | prev | next [-]

From my experience as a software engineer, doubling my productivity hasn’t reduced my workload. My output per hour has gone up, but expectations and requirements have gone up just as fast. Software development is effectively endless work, and AI has mostly compressed timelines rather than reduced total demand.

httpz 8 hours ago | parent | next [-]

There's a famous quote by a cyclist, "It never gets easier, you just go faster"

liuliu 12 hours ago | parent | prev | next [-]

It is not going to reduce your workload. It is going to remove one of your co-workers.

johnfn 12 hours ago | parent | next [-]

This seems unlikely. My company is in competition with a number of other startups. If AI removes one of my co-workers, our competitors will keep the co-worker and out-compete us.

danans 9 hours ago | parent | next [-]

> If AI removes one of my co-workers, our competitors will keep the co-worker and out-compete us.

This assumes that the companies' business growth is a function of the amount of code written, but that would not make much sense for a software company.

Many companies (including mine) are building our product with an engineering team 1/4 the size of what would have been required a few years ago. The whole idea is that we can build the machine to scale our business with far fewer workers.

majormajor 8 hours ago | parent [-]

How many companies have you worked at in the past where the backlog dried up and the engineering team sat around doing nothing?

Even in companies that are no longer growing I've always seen the roadmap only ever get larger (at that point you get desperate to try to catch back up, or expand into new markets, while also laying people off to cut costs).

Will we finally out-write the backlog of ideas to try and of feature requests? Or will the market get more fragmented as more smaller competitors can carve out different niches in different markets, each with more-complex offerings than they could've offered 5 years ago?

darth_avocado 11 hours ago | parent | prev | next [-]

> This seems unlikely

This is already happening. Fewer people are getting hired. Companies are quietly (sometimes not, like Block) letting people go. At a personal level all the leaders in my company are sounding the “catch up or you’ll be left behind” alarm. People are going to be let go at an accelerated pace in the future (1-3 years).

johnfn 11 hours ago | parent [-]

I don’t think that addresses my point. I understand a lot of companies are firing under the guise of AI, but it’s unclear to me whether AI is actually driving this - especially when the article we are both responding to says:

> We find no systematic increase in unemployment for highly exposed workers since late 2022

keeda 8 hours ago | parent | prev | next [-]

It depends on the "shape" of the company. Larger companies have a lot more of what I call "Conway Overhead", basically a mix of legit coordination overhead and bureaucracy. Startups by necessity have a lot less of that, and so are better "shaped" to fully harness AI.

vkou 12 hours ago | parent | prev | next [-]

> This seems unlikely.

It is absolutely likely. The hiring market for juniors is fucked atm.

Rury 11 hours ago | parent | next [-]

That's not necessarily a result of AI, you also have to consider the broader economic environment. I mean, it was also difficult to get a job as a graduate in 2008, whereas it's typically been easier to get a job when credit is cheap.

vkou 11 hours ago | parent [-]

It sure was, but as far as I'm aware, 2026 isn't in the middle of a generation-scale economic collapse.

(And if it is, what is the cause?)

majormajor 8 hours ago | parent | next [-]

Isn't it, for something like 70-80% of families? Just in slow-motion?

How long have we been hearing about crushing affordability problems for property? And how long ago did that start moving into essentials? The COVID-era bullwhip-effect inflation waves triggered a lot of price ratcheting that has slowed but never really reversed. Asset prices are doing great, as people with money continue to need somewhere to put it, and have been very effective at capturing greater and greater shares of productivity increases. But how's the average waiter, cleaning-business sole-proprietor, uber driver, schoolteacher, or pet supply shopowner doing? How's their debt load trending? How's their savings trending?

raddan 9 hours ago | parent | prev [-]

There’s a difference between a collapse and a slowdown. We don’t need a collapse for hiring to slow down [1,2]. I think we’re finally just seeing the maturation of software development. Software is increasingly a commodity, so maybe the era of crazy growth and hiring is over. I don’t think that we need AI to explain this either, although possibly AI will simply commodify more kinds of software.

[1] https://www.npr.org/2026/02/12/nx-s1-5711455/revised-labor-d...

[2] https://www.marketplace.org/story/2025/12/18/expect-more-of-...

majormajor 8 hours ago | parent | prev | next [-]

FAANG realizing that they can't make infinite money by expanding into every possible market while paying FAANG salaries for low-scale-CRUD-prototyping roles has a lot to do with this, and that started a bit earlier than the AI wave.

Lots going on right now in the market, but IMO that retreat is the biggest one still.

Many companies were basically on a path of infinite hiring between ~2011 and ~2022 until the rapid COVID-era whiplash really drove home "maybe we've been overhiring" and caused the reaction and slowdown that many had been predicting annually since, oh, 2015.

sdf2df 8 hours ago | parent [-]

You can't be a manager without anyone to manage.

There's a lot of perverse interests and incentives at play.

majormajor 8 hours ago | parent [-]

Manager gigs at FAANG are pretty rough right now in my network, you can't be a manager when the higher-ups notice your group isn't a big revenue generator and so doesn't justify new hires and bigger org charts, and cutting the middlemen is the easiest way to juice the ROI numbers. If the ICs that now have 1/3 the managerial structure and have to wear more hats don't turn things around, oh well, it's not a critical area anyway, just nuke it.

You can be an exec with 10-20% fewer random products/departments in your company, and maybe 40% fewer middle managers in the rest of them. You might even get a nice bonus for cutting all that cost! Bonuses for growth, bonuses for "efficiency" when the macro vibe shifts. Trim sails and carry on.

dvt 11 hours ago | parent | prev | next [-]

Because of overhiring during the post-COVID free money glitch, not because of AI.

johnfn 11 hours ago | parent | prev | next [-]

Aren't we both responding to an article which says:

> We find no systematic increase in unemployment for highly exposed workers since late 2022

sdf2df 11 hours ago | parent | prev | next [-]

Erm, it's been fucked for many years across many professions; it was just less so for software engineering in particular. Now entry into the S-E profession is taking a hit.

Also don't forget there's only so many viable revenue-generating and cost-saving projects to take. And as said above, overhiring during COVID.

nozzlegear 11 hours ago | parent | prev [-]

It was fucked before AI became "mainstream" too. Companies overhired during and after covid.

gedy 11 hours ago | parent | prev [-]

There are definitely tone-deaf statements from managers/leaders like "AI will allow us to do more with less headcount!" As if the end worker is supposed to be excited about that. Knuckleheads, lol.

raddan 9 hours ago | parent [-]

Yeah I’ve been scratching my head about this too. Like, if my boss said this, I would basically start looking for a new job right then and there. Seems like a good way to drive off your own talent.

bicx 11 hours ago | parent | prev | next [-]

In a bear market in a bloated company, maybe. We’re still actively hiring at my startup, even with going all-in on AI across the company. My PM is currently shipping major features (with my review) faster and with higher-quality code than any engineer did last year.

kace91 9 hours ago | parent | next [-]

>My PM is currently shipping major features (with my review) faster and with higher-quality code than any engineer did last year

That's... not a good look for your engineers?

bicx 7 hours ago | parent [-]

It’s hard to compare, honestly. Last year, my PM didn’t have the AI tools to do any of this, and engineers were spread thin. Now, the PM (with a specialized Claude Code environment) has the enthusiasm of a new software engineer and the product instincts of a senior PM.

margorczynski 3 hours ago | parent [-]

This is how it will go at least in the near term. Engineers will be phased out slowly by product/project management that will prompt the tool instead of the tech lead for the changes they want.

And in the longer term those people will also get deprecated.

danans 9 hours ago | parent | prev [-]

> In a bear market in a bloated company, maybe

Then any company that was staffed at levels needed prior to the arrival of current-level LLM coding assistants is bloated.

If the company was person-hour starved before, a significant amount of that demand is being satisfied by LLMs now.

It all depends on where the company is in the arc of its technology and business development, and where it was when powerful coding agents became viable.

IsTom 12 hours ago | parent | prev [-]

Or just make time for more Very Important Meetings.

causal 11 hours ago | parent | prev | next [-]

This - I can't think of any place I've ever worked where development ever outpaced backlog and tech debt.

ipaddr 11 hours ago | parent [-]

When you work long enough you'll find it. Places where changing software is risky and you can end up waiting for approvals. Places where another company purchased yours, or where you are getting shut down soon and there is no new work. Sometimes you end up on a system that they want to replace but they never get around to it.

Being overworked is sometimes better than being underworked. Sometimes the reverse is better. They both have challenges.

majormajor 8 hours ago | parent [-]

Outside of purchased-and-being-shut-down, these are still frequently "we want to do things but we're scared of breaking things" situations, not "we don't want to do anything." Even if the things they want to do are just "we want to move off this 90s codebase before everyone who knows how it works is dead."

In that sort of high-fear, change-averse environment, "get rid of all the devs and let the AI do it" may not be the most compelling sales pitch to leadership. ("Use it to port the code faster so we can spend more time on the migration plan and manual testing" might have better luck.)

byproxy 11 hours ago | parent | prev | next [-]

See: https://en.wikipedia.org/wiki/Jevons_paradox

andai 11 hours ago | parent [-]

Worst time to be an employee, as you are expected to work faster and faster. (The approach is very much quantity over quality.)

Best time to be a solo founder in underserved markets :)

MeetingsBrowser 12 hours ago | parent | prev | next [-]

The goal has always and will always be to complete as much as possible in the time allotted.

api 11 hours ago | parent | prev [-]

That’s the economy in general. Labor saving innovations increase productivity but do not usually reduce work very much, though they can shift it around pretty dramatically. There are game theoretic reasons for this, as well as phenomena like the hedonic treadmill.

darth_avocado 11 hours ago | parent [-]

The ideal state for every company is minimum input costs with maximum output. Labor always gets cut out of the loop because it’s one of the most expensive input costs.

gadders 12 minutes ago | parent | prev | next [-]

If people think Elite Overproduction (https://en.wikipedia.org/wiki/Elite_overproduction) is causing strife now, wait until tens of thousands of people with degrees get thrown out of work.

ChrisMarshallNY 9 hours ago | parent | prev | next [-]

I'm working on a project right now, that is heavily informed by AI. I wouldn't even try it, if I didn't have the help. It's a big job.

However, I can't imagine vibe-coders actually shipping anything.

I really have to ride herd on the output from the LLM. Sometimes, the error is PEBCAK, because I erred, when I prompted, and that can lead to very subtle issues.

I no longer review every line, but I also have not yet gotten to the point, where I can just "trust" the LLM. I assume there's going to be problems, and haven't been disappointed, yet. The good news is, the LLM is pretty good at figuring out where we messed up.

I'm afraid to turn on SwiftLint. The LLM code is ... prolix ...

All that said, it has enormously accelerated the project. I've been working on a rewrite (server and native client) that took a couple of years to write, the first time, and it's only been a month. I'm more than half done, already.

To be fair, the slow part is still ahead. I can work alone (at high speed) on the backend and communication stuff, but once the rest of the team (especially shudder the graphic designer) gets on board, things are going to slow to a crawl.

Mengkudulangsat 6 hours ago | parent | next [-]

> However, I can't imagine vibe-coders actually shipping anything.

I'm a vibe-coder, and I've shipped lots! The key is to vibe-code apps that have a single user (me). I hadn't coded anything for 15 years prior to January, either.

ChrisMarshallNY an hour ago | parent | next [-]

I suspect we have different definitions of “ship.”

I am usually my principal customer, but I tend to release publicly.

hunterpayne an hour ago | parent | prev [-]

So you're the dev who wrote Tea then, huh?

enraged_camel 8 hours ago | parent | prev | next [-]

>> I no longer review every line, but I also have not yet gotten to the point, where I can just "trust" the LLM.

Same here. This is also why I haven't been able to switch to Claude Code, despite trying multiple times. I feel like its mode of operation is much more "just trust the generated code" than Cursor's, which lets you review and accept/reject diffs with a very obvious and easy-to-use UX.

majormajor 8 hours ago | parent [-]

Most of the folks I work with who uninstalled Cursor in favor of Claude Code switched back to VSCode for reviewing stuff before pushing PRs. Which... doesn't actually feel like a big change from just using Cursor, personally. I tried Claude Code recently, but like you preferred the Cursor integration.

I don't have the bandwidth to juggle four independent things being worked on by agents in parallel so the single-IDE "bottleneck" is not slowing me down. That seems to work a lot better for heavy-boilerplate or heavy-greenfield stuff.

I am curious whether, if we refactored our codebase the right way, more small/isolatable subtasks would be parallelizable with lower cognitive load. But I haven't found it yet.

MattGaiser 4 hours ago | parent | prev [-]

Is there a reason you don’t use a hook to make all code pass your linters before you look at it?

ChrisMarshallNY 2 hours ago | parent [-]

I’m probably gonna do that (I use SwiftLint[0]. Note: I no longer use anything CocoaPods these days; I wrote that years ago), but I tend to be quite strict, and didn’t want to be constantly interrupting myself, polishing turds. It was really kind of a joke.

I haven’t turned it on, yet, because of the velocity of the work, but I think I’ve found my stride.

[0] https://littlegreenviper.com/swiftlint/

behnamoh 12 hours ago | parent | prev | next [-]

I don't think there's been much of an impact, really. Those who know how to use AI just got marginally more productive (because why would you reveal your 10x productivity boost so your boss hands you 10x more tasks to finish?), and those w/o AI knowledge stayed the way they were.

The real impact is for indie-devs or freelancers but that usually doesn't account for much of the GDP.

piyh 12 hours ago | parent | next [-]

Work is freezing hiring and upping spending on tokens for everyone.

Don't know if this is effective and I don't think management knows either, but it's what they're doing

re-thc 12 hours ago | parent [-]

> Work is freezing hiring and upping spending on tokens for everyone.

Doesn't mean the two are related.

Is AI just the excuse? We've got tariffs, war, uncertainty and other drama non stop.

piyh 12 hours ago | parent [-]

It's what they're telling us

moregrist 9 hours ago | parent | next [-]

Of course they are.

Management often has a perverse short-term incentive to make labor feel insecure. It’s a quick way to make people work harder ... for a while.

Also, “AI makes us more productive so we can cut our labor costs” sounds so much better to investors than some variation of “layoffs because we fucked up / business is down / etc”

shimman 12 hours ago | parent | prev | next [-]

You should look into the concepts of skepticism, materialism, and cynicism. Maybe don't trust the leadership of where you work, the leadership that sees you as a number and not a human.

vurudlxtyt 5 hours ago | parent | prev | next [-]

Do you believe everything management tells you, whether you’re internal or external?

pydry 12 hours ago | parent | prev [-]

Which story sends a more positive signal to shareholders?

"We've frozen hiring because our growth potential is tapped out."

"We've frozen hiring because AI can replace employees."

thewhitetulip 7 hours ago | parent | prev | next [-]

If everyone were 10x as productive, then we would have a native Claude Code app for each platform.

Instead they are using Electron and calling it a day. Very ironic, isn't it? If AI is so good, then why don't we get native software from Anthropic?

citrin_ru 3 hours ago | parent [-]

The shift from quality to velocity didn’t start yesterday; AI only accelerated it. The majority of comments here tell how fast Claude can generate code, not how good the result is. Electron is the perfect fit for the move-fast mindset. It is likely that Claude’s developers don’t see Electron as a problem at all.

thewhitetulip a minute ago | parent [-]

I understand why they used Electron. But since AI is so amazing, they should be using Claude to generate super-optimized native software! They do claim that Claude is better than humans and that AI is replacing programmers any day now.

So why aren't they using their own software to generate a Linux-optimized package, a Swift app for macOS, and whatever Windows uses?

That would be the best ad for AI. See, we use our own product!

But it doesn't happen.

rishabhaiover 12 hours ago | parent | prev [-]

I'd be curious to see the shift in numbers since December, 2025.

g947o 12 hours ago | parent | prev | next [-]

I am not going to trust a single word from a company whose business is selling you AI products.

SamuelAdams 7 hours ago | parent | next [-]

I also thought it was hilarious that they invented a brand new metric that (surprise) makes their product’s long term projection look really good (financially).

marginalia_nu 12 hours ago | parent | prev [-]

... and eyeing an IPO.

holografix 10 hours ago | parent | prev | next [-]

One of the more interesting takes I heard from a colleague, who’s in the marketing department, is that he uses the corporate-approved LLM (Gemini) for “pretend work” or very basic tasks. At the same time he uses Claude on his personal account to seriously augment his job.

His rationale is that he won’t let the company log his prompts and responses so they can’t build an agentic replacement for him. Corporate rules about shadow IT be damned.

Only the paranoid survive I guess

avidiax 3 hours ago | parent [-]

It's not his company that will train on his prompts. It's the personal account that will, unless it is fully paid and he's opted out of training on his prompts.

ungovernableCat an hour ago | parent [-]

>opted out of training on his prompts

I’d argue this can’t be trusted either considering the AI labs already established they’re willing to break laws (copyright) if the ultimate legal consequence is just a small fine or settlement.

zmmmmm 6 hours ago | parent | prev | next [-]

The numbers they show are barely distinguishable from noise, as far as I can interpret them.

For me, the impact is absolutely in hiring juniors. We basically just stopped considering it. There's almost no work a junior can do now where I wouldn't look at it and think it's easier to hand off in some form (possibly different from what the junior would do) to an AI.

It's a bit illusory though. It was always the case that handing off work to a junior person was often more work than doing it yourself. It's an investment in the future to hire someone and get their productivity up to a point of net gain. As much as anything it's a pause while we reassess what the shape of expertise now looks like. I know what juniors did before is now less valuable than it used to be, but I don't know what the value proposition of the future looks like. So until we know, we pause and hold - and the efficiency gains from using AI currently are mostly being invested in that "hold" - they are keeping us viable from a workload perspective long enough to restructure work around AI. Once we do that, I think there will be a reset and hiring of juniors will kick back in.

chii 6 hours ago | parent [-]

It doesn't make sense to stop hiring juniors.

If AI increases productivity, and juniors are cheaper to hire but just as able to hand off tasks to AI as a senior, then it makes more sense to hire more juniors and get them working with AI as soon as possible. This produces output faster, from which more revenue could be derived.

So the only limiting factor is the possibility of not deriving more revenue - which is not an AI issue but a broader macroeconomic one.

jakobnissen 5 hours ago | parent | next [-]

Juniors are not as capable of delegating to AI as seniors are. Delegation to AI requires code review, catching the AI when it doesn't follow good engineering practices, and catching the AI in semantic mistakes due to the AI's lack of broader context. Those things are all hard for juniors.

ragebol 2 hours ago | parent [-]

Isn't that the point, to be able to learn that?

The craft changes with all these AI helpers, so the juniors also have to catch up and change with it. Or there won't be any seniors in due time.

zmmmmm 5 hours ago | parent | prev [-]

> but is just as able to hand off tasks to ai

I think this is the crux of it. Someone who doesn't know the right thing to do just isn't in a position to hand off anything. Accelerating their work will just make them do the wrong thing faster.

gentleman11 2 hours ago | parent | prev | next [-]

I know kids avoiding many high-paying careers because of AI right now, and artists just giving up everywhere I look. Thanks, AI.

mikkupikku 10 minutes ago | parent [-]

Art should be done foremost because it's a passion for the artist. If you give up art just because you can no longer sell it, because you're being outcompeted by computer-generated furry porn, then the world hasn't really lost anything of value.

ausbah a minute ago | parent [-]

That seems a bit harsh. If your livelihood has come from making some sort of visual art for however many years and work has been drying up because of AI, a sudden career pivot is pretty difficult.

sanex 6 hours ago | parent | prev | next [-]

I know multiple devs who could see a very large productivity increase but instead choose to slow their output on purpose and play video games. I get it.

tayo42 6 hours ago | parent [-]

This is what "In Praise of Idleness" is about.

amelius an hour ago | parent | prev | next [-]

Productivity up by 10%. Happiness, life satisfaction and feeling of self-worth down by 20%.

zthrowaway 12 hours ago | parent | prev | next [-]

My day-to-day is even busier now with agents all over the place making code changes. The security landscape became even more complex overnight. The only negative impact I see is that there's not much need for junior devs right now; the agent fills that role in a way. But we'll have to backfill some way or another.

nitwit005 11 hours ago | parent | prev | next [-]

The problem with using unemployment as a metric is that hiring is driven by perception. You're making an educated guess as to how many people you will need in the future.

Anthropic can cause layoffs through pure marketing. People were crediting an Anthropic statement with causing a drop in IBM's stock value, which may genuinely lead to layoffs: https://finance.yahoo.com/news/ibm-stock-plunges-ai-threat-1...

We'll probably have to wait for the hype to wear off to get a better idea, but that might take a long while.

pixl97 11 hours ago | parent [-]

Between 2004 and 2008 I did many things in computing as a company offering my services; one of them was information-gathering automation. It almost never immediately led to decreases in employment. The systems had to be in place for a while, people had to get used to them, and people had to stop making common mistakes with them.

Then the 2008 crash happened and those people were gone in a blink of an eye and never replaced. The companies grew in staff after that, but it was in things like sales and marketing.

nitwit005 10 hours ago | parent [-]

I'm afraid I can't find the connection between this and what I wrote.

sp4cec0wb0y 12 hours ago | parent | prev | next [-]

My speed shipping software increased, but so did my company's demands for features.

sdf2df 11 hours ago | parent | next [-]

I don't really get this TBH.

Shipping speed never was the issue. Most companies are terrible at figuring out what exactly they should be allocating resources behind.

Speeding up does not solve the problem that most humans at the top of the hierarchy are poor thinkers. In fact, it compounds it. More noise, nice.

thewhitetulip 7 hours ago | parent [-]

Yep, requirements gathering takes forever. Then validation takes forever.

Writing code is a lesser problem than figuring out what we want and when we want it, and getting stakeholders in one place.

sdf2df 7 hours ago | parent [-]

Finally, a fella who gets it.

Apple showed this decades ago: they got the iPhone and iPod developed and out the door on relatively short time scales, given the impact of the products on the world. Once you know what you want, exactly what you want, things move fast - really fast.

thewhitetulip 4 hours ago | parent [-]

I've seen the business team take 3-4 months to do UAT.

But sure, let's buy $200-per-month Claude to ship things faster, lol.

MeetingsBrowser 12 hours ago | parent | prev | next [-]

Or worse. I've heard stories from friends where leadership expects huge boosts in productivity due to LLMs and perceives anything but an order-of-magnitude boost as incompetence or a refusal to adapt.

mikkupikku 14 minutes ago | parent [-]

To be fair, some of it might genuinely be refusal to adapt? If we go by HN comments, there definitely do seem to be at least some people who are letting their hangups prevent them from learning this tech.

> It can't think, it just predicts likely tokens

> I can't believe this industry I once cherished for rational professionalism has fallen for nondeterminism

> Sorry, I'm just not going to participate in destroying the planet with these power-hungry DCs

> All this stuff actually costs 10x what a human developer costs but they're dumping the service at a low price to make us dependent.

> It's a bubble, or a scam, in a year or two everything will go back to normal.

Tell me sentiments like these don't get bandied about by devs who want to keep doing things the way they know and like.

22c 12 hours ago | parent | prev [-]

PMs can now also ship their half-baked requirements documents even faster thanks to the help of AI.

boxedemp 8 hours ago | parent | prev | next [-]

I think it really depends on what you're working on. I do some consulting and have found it's not helping the C++ devs as much as it's helping the HTML/JS devs.

dm270 3 hours ago | parent [-]

Finally, someone who says it. I think it's a multi-variable problem, since it handles languages differently. Also, working in legacy code makes things worse more often than not.

rishabhaiover 12 hours ago | parent | prev | next [-]

> There's suggestive evidence that hiring of young workers (ages 22–25) into exposed occupations has slowed — roughly a 14% drop in the job-finding rate

There goes my excuse of not finding a job in this market.

reactordev an hour ago | parent | prev | next [-]

I call BS on this, as the ones displaced aren't in the workforce anymore. I haven't been able to find work in over a year, despite applying to over 200 jobs a month.

rando1234 2 hours ago | parent | prev | next [-]

Has this been peer-reviewed?

ausbah 5 hours ago | parent | prev | next [-]

This keeps me up at night. I'm in a role that is essentially deployment management for LLMs at a FAANG-esque company: very little coding or need to code, mostly navigating GUIs, pipelines, and Docker to get deployments updated with a new venting or model version or some patch.

recursivedoubts 10 hours ago | parent | prev | next [-]

A possible outcome of AI: domestic technical employment goes up because the economics of outsourcing change. Domestic technical workers working with AI tools can replace outsourcing shops, eliminating time-shift issues, etc., at similar or lower cost.

default-kramer 4 hours ago | parent | prev | next [-]

I'm not really concerned about the availability of SW dev jobs, but I am concerned about the quality of them. For many companies the velocity (and quality, much to my chagrin) of the code you can produce doesn't really matter. What matters more is whether or not you're building the right thing, and too often you're not. These companies also tend to keep more headcount than seems justified, I think because they are gambling that a few employees are going to do something awesome but they don't know which ones. As AI gets better what will these companies do? I don't think they will fire a bunch of SW devs. I think instead they will embrace the slop and just take more shots, and crazier shots. It doesn't just give us something to do, it also gives a bunch of PHBs something to do.

bob1029 3 hours ago | parent | prev | next [-]

> Claude is extensively used for coding, Computer Programmers are at the top, with 75% coverage

I think there are some advantages to being first.

It's time to re-evaluate strategies if we've been operating under the assumption that this is going to be a bubble, or otherwise largely bullshit. It definitely works. Not everywhere all the time, but often enough to be "scary" now. Some of my prior dismissals, like "text-to-SQL will never work", are looking pale today.

mikkupikku 19 minutes ago | parent [-]

It is a bubble. It also works. Remember the dot-com bubble: the internet did work, but that doesn't mean there wasn't also a real bubble.

andai 11 hours ago | parent | prev | next [-]

How is Anthropic getting this data? Are they running science experiments on people's chat history? (In the app, API or both?)

boxedemp 8 hours ago | parent [-]

I'm sure they're collecting all kinds of insights from the prompts.

synelabs 5 hours ago | parent | prev | next [-]

I'm an SDE with 1 YOE using AI tools heavily (doing "day's work" in ~2 hrs, perfect reviews). Spending most time on specs/review vs. raw coding. Worried I'm optimising short-term output over long-term skill development. Should I consider pivoting to AI/ML roles? Would love advice from anyone who's hired juniors in the current era.

alexchantavy 5 hours ago | parent [-]

Data point of 1: Having hired juniors as a startup founder, I need more generalists than AI/ML specialists. AI application work right now is basically standard software engineering - you’re finding clever ways to supply the right context to a model within certain constraints.
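
To make that concrete, here's a minimal sketch of the kind of context-supply work I mean. Everything here - the function name, the scoring scheme, the crude token estimate, and the budget - is made up for illustration, not any particular product's API:

    # Hypothetical sketch: greedily pack the most relevant retrieved
    # snippets into a prompt without exceeding a fixed context budget.
    def build_prompt(question, snippets, budget_tokens=4000):
        # snippets: (relevance_score, text) pairs from whatever
        # retrieval step you already have.
        chosen, used = [], 0
        for score, text in sorted(snippets, reverse=True):
            cost = len(text.split())  # crude token estimate
            if used + cost > budget_tokens:
                continue
            chosen.append(text)
            used += cost
        context = "\n---\n".join(chosen)
        return f"Context:\n{context}\n\nQuestion: {question}"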

No one knows what’s going to happen in the future. Yes there already are fewer SWE jobs than before because of AI, and yes the days of companies hiring new grads in droves at $300k+ packages are likely over. IMO all you can really do is study what you’re interested in, learn it deeply, and do good work with cool people. If unsure, it’s possible to go back to what you were doing before if the new path doesn’t work out.

Noyra-X 3 hours ago | parent | prev | next [-]

What's interesting from a practical standpoint: the paper confirms what we're seeing in SME deployments – AI augments, not replaces. But the real productivity gain only kicks in when you redesign the process around the AI, not just bolt it on. Most small businesses skip that step entirely and then wonder why their 'AI tool' isn't delivering. The organizational restructuring is the hard part, not the technology. Anyone here seen teams actually get this right systematically?

keeda 9 hours ago | parent | prev | next [-]

This rhymes with another recent study from the Dallas Fed: https://www.dallasfed.org/research/economics/2026/0224 - it suggests AI is displacing younger workers but boosting experienced ones. This matches what we see discussed here, as well as the couple of similar studies that have come up before.

Also, it seems to me the concept of "observed exposure" is analogous to OpenAI's concept of "capability overhang" - https://cdn.openai.com/pdf/openai-ending-the-capability-over...

I think the underlying reason is simply because companies are "shaped wrong" to absorb AI fully. I always harp on how there's a learning curve (and significant self-adaptation) to really use AI well. Companies face the same challenge.

Let's focus on software. By many estimates, code-related activities take up only 20-60%, maybe even as low as 11%, of software engineers' time (e.g. https://medium.com/@vikpoca/developers-spend-only-11-of-thei...). But consider where the rest of the time goes: largely coordination overhead. Meetings and the like drain a lot of time (more so the more senior you get), and those are mostly about getting a bunch of people across the company, along the dependency web, to align on technical directions and roadmaps.

I call this "Conway Overhead."

This is inevitable because the only way to scale cognitive work was to distribute it across a lot of people with narrow, specialized knowledge and domain ownership. It's effectively the overhead of distributed systems applied to organizations. Hence each team owned a couple of products / services / platforms / projects, with each member working on an even smaller part of it at a time. Coordination happened along the hierarchy of the org chart because that is most efficient.

Now imagine a single AI-assisted person competently owning everything a team used to own.

Suddenly the team at the leaf layer is reduced to 1 from about... 5? This instantly gets rid of a lot of overhead like daily standups, regular 1:1s and intra-team blockers. And inter-team coordination is reduced to a couple of devs hashing it out over Slack instead of meetings and tickets and timelines and backlog grooming and blockers.

So not only has the speed of coding increased, the amount of time spent coding has also gone up. The acceleration is super-linear.

But, this headcount reduction ripples up the org tree. This means the middle management layers, and the total headcount, are thinned out by the same factor that the bottom-most layer is!
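
A toy model makes the ripple concrete. The span of control, team size, and worker counts below are assumptions picked purely for illustration:

    # Toy model: total headcount of a management pyramid, assuming each
    # manager oversees ~8 people. Shrinking the leaf layer thins every
    # layer above it by roughly the same factor.
    def total_headcount(leaf_workers, span=8):
        total, layer = 0, leaf_workers
        while layer > 1:
            total += layer
            layer = -(-layer // span)  # ceil: managers needed above
        return total + 1  # the one person at the top

    before = total_headcount(1000)      # 1000 ICs, in teams of ~5
    after = total_headcount(1000 // 5)  # each team collapses to 1 person
    print(before, after)                # 1144 vs. 230: the whole pyramid shrinks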

And this focused only on the engineering aspect. Imagine the same dynamic playing out across departments when all kinds of adjacent roles are rolled up into the same person: product, design, reliability...

These are radical changes to workflows and organizations. However, at this stage we're simply shoe-horning AI into the old, now-obsolete ticket-driven way of doing things.

So of course AI has a "capability overhang" and is going to take time to have broad impact... but when it does, it's not going to be pretty.

nl 11 hours ago | parent | prev | next [-]

This is a pretty interesting report.

The TL;DR is that there is little measurable impact (and I'd personally add "yet").

To quote:

"We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations"

My belief based on personal experience is that in software engineering it wasn't until November/December 2025 that AI had enough impact to measurably accelerate delivery throughout the whole software development lifecycle.

I have doubts that this impact is measurable yet - there is a lag between hiring intention and impact on jobs, and outside Silicon Valley, large-scale hiring decisions are rarely made on a 3-month timeframe.

The most interesting part is the radar plot showing the lack of usage of AI in many industries where the capability is there!

jiggawatts 9 hours ago | parent [-]

> My belief based on personal experience is that in software engineering it wasn't until November/December 2025 that AI had enough impact to measurably accelerate delivery throughout the whole software development lifecycle.

Gemini 3 and Opus 4.6 were the "woah, they're actually useful now!" moment for me.

I keep saying to colleagues that it's like a rising tide. Initially the AIs were lapping around our ankles, now the level of capability is at waist height.

Many people have commented that 50% of developers think AI-generated code is "Great!" and 50% think it's trash. That's a sign that AI code quality is at the level of the median developer. This will likely improve to 60%-40%, then 70%-30%, and so on...
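
That median reading is easy to sanity-check with a toy simulation. The normal-distribution model of "developer quality" below is my own assumption, purely for illustration:

    # If AI output sits at quality level q, the share of devs who call it
    # trash is just the share whose own quality exceeds q.
    import random

    def share_unimpressed(ai_quality, n=100_000):
        devs = (random.gauss(0, 1) for _ in range(n))
        return sum(d > ai_quality for d in devs) / n

    print(share_unimpressed(0.0))   # ~0.50: AI at the median splits opinion
    print(share_unimpressed(0.52))  # ~0.30: AI at the ~70th percentile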

falkensmaize 7 hours ago | parent [-]

I don’t see definitive evidence that there is some kind of Moore’s law for model improvement though. Just because this year’s model performs better than last year’s model doesn’t mean next year’s model will be another leap. Most of the big improvements this year seem to be around tooling - I still see Opus 4.6 (which is my daily driver at work) making lots of mistakes.

nl 3 hours ago | parent [-]

Things like the METR benchmark aren't sufficient?

I mean, Moore's law is just a rule of thumb, but the curve fits METR just as well.
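
For what it's worth, a Moore's-law-style comparison is just a log-linear fit of task horizon against time. A sketch of the method, with invented data points (placeholders, not METR's actual measurements):

    # Fit a doubling time the way one would for Moore's law: a straight
    # line through log2(task horizon) vs. time. The data points are
    # invented purely to show the method.
    import math

    months = [0, 7, 14, 21, 28]    # hypothetical release dates
    horizons = [4, 8, 15, 33, 60]  # hypothetical task horizons (minutes)

    ys = [math.log2(h) for h in horizons]
    n = len(months)
    mx, my = sum(months) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(months, ys))
             / sum((x - mx) ** 2 for x in months))
    print(f"doubling time ~ {1 / slope:.1f} months")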

nickphx 12 hours ago | parent | prev | next [-]

You know you're having a real impact when you have to self-report on the impact you're having.

thewhitetulip 7 hours ago | parent | prev | next [-]

Did you all read about the 13-hour AWS outage caused by their autonomous AI agent deciding to delete everything and rewrite from scratch?

ares623 3 hours ago | parent [-]

And they blamed the engineer, because insurance doesn't pay for AI failures.

geuis 6 hours ago | parent | prev | next [-]

I really hate to say it, but this article in particular needs a TL;DR. The author takes the web-recipe approach: the actual factual info isn't up front, so you have to parse through everything to find anything important.

Kinda done with this.

If you have something important to say, say it up front and back it up with literature later.

thatmf 11 hours ago | parent | prev [-]

cigarettes don't cause cancer! -cigarette companies

keeda 8 hours ago | parent [-]

Except this is the company that's been saying "We will cause cancer, please regulate us!"