aurareturn 2 days ago

Why is #1 hypothetical?

If 1 employee can do the work of 3 now but Meta's TAM can't grow 300%, then they can cut some employees.

In other words, worker productivity might be higher than what the ad business can grow into, so Meta can safely cut cost and still hit their growth targets.
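The arithmetic behind this argument can be sketched as follows. All numbers here are hypothetical, chosen only to illustrate the claim, not Meta's actual figures:

```python
# Hypothetical illustration: headcount needed when productivity outpaces demand.
# Every number below is made up for the sake of the argument.

def required_headcount(current_headcount, productivity_multiplier, demand_growth):
    """Headcount needed to serve the grown demand at the new productivity level."""
    return current_headcount * (1 + demand_growth) / productivity_multiplier

# 10,000 employees, each now doing the work of 3, but the ad business (TAM)
# only grows 50% -- far short of the 200% growth needed to absorb them all.
print(required_headcount(10_000, 3.0, 0.5))  # -> 5000.0
```

Under those toy assumptions, half the workforce could be cut while still hitting the growth target, which is the "safely cut cost" scenario described above.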

Edit: I should be clear that I think #1 has been achieved for software development.

bandrami 2 days ago | parent | next [-]

Because I think "1 employee can do the work of 3 now" still hasn't actually been demonstrated

shawabawa3 2 days ago | parent | next [-]

> Because I think "1 employee can do the work of 3 now" still hasn't actually been demonstrated

1 employee doing the work of 3 is, I think, a stretch

but 1 employee doing the work of 1.1 employees from a year ago I think is almost certainly true - at least, everyone I work with, myself included, is _at least_ 10% more productive, and we use AI extensively

steveBK123 2 days ago | parent [-]

Right, I think orgs are unclear how to wield this yet, though.

In my 20-year career I’ve rarely been on a team with more than 3-5 people, at least within a given region.

So at that scale a 10% gain doesn’t really eliminate a member of any given team. You do get more productive, which is notoriously hard to measure in SWE. It’s possible that translates into iterating faster or closing tickets further down the backlog, which is useful but not per se staff-reducing.

Maybe in the Mag7, where you have massive engineering orgs, the 10% can impact a given team more.

Barbing 2 days ago | parent | prev | next [-]

I wonder what proportion[1] of knowledge workers believe they have at least one colleague who the business would be better off replacing with software

and how many of them are totally wrong, or right about it!

[1] and how it might be changing with new generations of models

bandrami 2 days ago | parent [-]

I think the bigger issue is employees who could largely just not be replaced at all.

For all the hype about the 1X vs 10X distinction the real stumbling block is how many 0Xes there are out there and how frequently they tend to make it through hiring.

jimbokun 2 days ago | parent | prev | next [-]

We’re seeing multiple reports now of the number of PRs closed increasing by 1.5x, and that sounds about right.

amelius 2 days ago | parent | prev | next [-]

Ask professional translators.

DonsDiscountGas 2 days ago | parent | prev [-]

Pretty sure it has for coding

oytis 2 days ago | parent | next [-]

Personally, no, haven't seen it in professional settings. Colleagues using AI heavily did not show any superhuman development speed, but definitely increased review burden compared to writing code by hand.

bandrami 2 days ago | parent | prev | next [-]

I'm not seeing it, if that's true. None of my vendors, none of my open source upstreams, and not even our own dev team have actually been getting software out faster than they were a couple of years ago. As a sysadmin, Docker had a bigger impact on my work in its first few years than LLMs have had in theirs.

hansmayer 2 days ago | parent | prev | next [-]

> If 1 employee can do the work of 3 now but Meta's TAM can't grow 300%

If you go by the measure of LoC per employee, then your number is probably even higher, somewhere between 10-20x per employee. The problem is that producing 10,000 lines of AI slop per day is not a good productivity measure - all it does is create more technical debt and issues that nobody is reviewing anymore, because a) people get fatigued and at some point just wave the AI slop through, b) there is not enough manpower because people got laid off because of "AI", and c) people are generally irritated by being asked to review and correct AI slop. There is a societal pushback brewing, and it won't be nice for the so-called AI in the end. Think about the fact that most people who are exhilarated by the "AI" are either incompetent or incompetent and old. Most of the young folks, even those not in technical domains, firmly reject AI. When did you ever hear of a revolutionary new tech that was actively hated by young people?

> Edit: I should be clear that I think #1 has been achieved for software development.

Maybe in the world of WP plugins/TYPO3 and other simple work, though even those are fairly complex in their own ways, which the LLMs will trip over a fair amount of the time. Not if you are doing anything remotely complex. The LLMs will still either put your secrets in plain text, suggest the laziest f*ing implementation of a problem, etc. It's just a shitshow nowadays, compounded by the LLM companies trying to keep costs low (and thereby keep the "users" hooked), which they currently accomplish by shortchanging you and dumbing the LLMs down - because otherwise they'd have to charge the true cost, upwards of tens of thousands of dollars per seat, which would render their initial value proposition completely useless. Something has to give.

zeroonetwothree 2 days ago | parent | next [-]

I have seen some internal data from one large company that LOC went up 100% but actual features shipped to customers went up only around 10%.

Now not all the extra code is necessarily useless (you can imagine some refactoring or perf improvements) but clearly a lot of it is.

We still aren’t very good at knowing how to use AI judiciously.
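Taken at face value, those two figures imply the amount of code per shipped feature nearly doubled. A rough back-of-the-envelope calculation, using only the approximate percentages quoted above:

```python
# Back-of-the-envelope: how much more code per shipped feature?
# Figures are the approximate ones quoted above (LOC +100%, features +10%).
loc_growth = 2.0       # total LOC written: up 100%
feature_growth = 1.10  # features shipped to customers: up ~10%

loc_per_feature_inflation = loc_growth / feature_growth
print(round(loc_per_feature_inflation, 2))  # -> 1.82
```

That is, each shipped feature now drags along roughly 1.8x the code it used to, some of it refactoring or perf work, but plausibly much of it dead weight.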

tren_hard 2 days ago | parent | prev | next [-]

Funny you say this: my Meta SWE friends have been talking about how many more LoC they have been producing since they were given LLMs - tens of thousands more. The thing I have found most interesting is that this seems to be the only metric that actually matters to them. Just generating more and more.

hansmayer 2 days ago | parent [-]

Give a man a hammer... and tell him his performance review outcome is tied to his use of that hammer, then watch every problem get, for lack of a better term, nailed down. With said hammer, of course.

jeremyjh 2 days ago | parent | prev [-]

> Maybe in the world of WP-plugins/typo3 and other simple work, though even those are fairly complex

This was a reasonable position to hold 9 months ago but it’s absurd now. I’m not going to convince you - but you really should give it a try.

bandrami 2 days ago | parent | next [-]

The thing is, in 9 months you'll still say that, talking about now. I know this because I've been hearing this for two years.

jeremyjh a day ago | parent [-]

I was saying the same thing you are 9 months ago. It isn't the same people that have been saying this for the last two years. Every three to six months there is a new cohort.

hansmayer 2 days ago | parent | prev [-]

> but you really should give it a try.

That's cute; I hope you enjoy that high, it's really impressive at first. FYI, I've been using GH Copilot since the early days (invited to early access) AND paying for it for my entire company ever since MS published the first commercial plan, all the way to the latest enshittification drama with Opus 4.6 being pulled and Opus 4.7 taxed at a 7.5x rate. Yeah, it's great for quick tryouts and the like. But using it in a wider scope, with complex requirements and a dynamic environment? Complete shitshow.

> This was a reasonable position to hold 9 months ago

Browse my comment history. There were people just like you 6, 9, or 12 months ago telling me exactly the same thing. Some also threatened that I would "be the first to go away". Like I said, cute, actually :) You know, in January Dario Amodei announced, again, that AI would write ALL code within 6 months. Do you see it happening?

jeremyjh a day ago | parent [-]

>That's cute,

What kind of smug asshole says something like this?

If you think early days of GH Copilot are remotely relevant to what is happening now, we do not live in the same world.

hansmayer a day ago | parent [-]

> If you think early days of GH Copilot are remotely relevant to what is happening now

Let's learn some English grammar. I had said: "I've been using GH Copilot since early days". Translated from plain English, "have been using..." = *an action or state starting in the past AND continuing UP TO the present time*. I did not say "I used it only during the early days and never again". Plus my team. Honestly - and it's only a hypothesis at this point, which I cannot prove - I meet more and more people whose usage of LLMs seems to be seriously impacting their reading comprehension and writing skills. Do some LLM detox; I mean this without any kind of disrespect.

> What kind of smug asshole says something like this?

I've also noticed over the years that the folks who get this offended are usually the ones who are uncertain about their own opinion. I simply found your statement cute, that's all. You are obviously in the early stages of using it, and I assume you're using it to alleviate your own little, relatively simple tasks - tasks which would otherwise have overburdened you. It's only human to try to do so, but it hardly describes "productivity" the way you seem to think it does.

jeremyjh 21 hours ago | parent [-]

> You are obviously in the early stages of using it

I am not. You do not have any knowledge about what I've done or for how long. The arrogance is truly breathtaking. You should work on it.

I've also used GH Copilot since the early days, and ChatGPT since launch. I did not start using agentic systems (mostly Cursor) until about a year ago, and found them to be of little help in generating code in our large, long-lived, and sprawling code base. Something fundamentally changed with Opus 4.5, but I didn't notice it for quite a while because I had already constrained my usage patterns to what I thought I knew to be their limits.

> little, relatively simple tasks in your job which otherwise would have been overburdening for you

I have done that, and many other things. You do not know anything at all about what I have done with AI. You are projecting your ignorance in a way that makes sense to you, but it is still nothing but ignorance.

If you are actually interested in my recent observations I posted this in another thread recently.

https://news.ycombinator.com/reply?id=47897218

hartbook 2 days ago | parent | prev [-]

If you start a sentence with "if", that's the hypothetical.