computerphage 13 hours ago

Why isn't the AI story believable? It seems to me that AI is getting more and more productive

mrwaffle 12 hours ago | parent | next [-]

Sure, but the low-hanging fruit is mostly picked, so what is driving the idea of _job replacement_ if the next branch up the tree is 3-5 years out? Beyond tooling empowering existing employees, I've seen very little to indicate a major jump in productivity, and nothing close to job replacement (for technical roles). Often it's still accruing various forms of technical debt and other complexity. Unless these are some 1% slice of nontechnical roles, it doesn't make much sense as anything other than these companies' own internal projections for the year, not a read on the broader economy. Maybe they have such a large ship to turn that they actually need to plan 2-3 years out? I don't get it; I still see people hiring technical writers on a daily basis, even. So what's getting cut there?

bopbopbop7 13 hours ago | parent | prev | next [-]

Is there any quantitative evidence for AI increasing productivity? Other than AI influencer blog posts and pre-IPO marketing from AI companies?

medvezhenok 12 hours ago | parent | next [-]

What exactly would that evidence look like, for you?

It definitely increases some types of productivity: at work, Opus one-shotted a visualization that would likely have taken me at least a day to write before. Then again, I would never have written that visualization before LLMs (because the effort wasn't worth it), so I guess it's Jevons Paradox in action somewhat.

In order to observe the productivity increases, you need a measure at a scale where the gains would actually register, the same way a saturated benchmark like the AIME stops telling us anything useful about model improvement.

bwestergard 11 hours ago | parent | next [-]

"What exactly would that evidence look like, for you?"

https://fred.stlouisfed.org/series/MFPPBS https://fred.stlouisfed.org/series/OPHNFB

Productivity is by definition real output (usually inflation adjusted dollars) per unit of input. That could be per hour worked, or per representative unit of capital + labor mix.

I would accept an increase in the slope of either of these lines as evidence of a net productivity increase due to artificial intelligence (unless there were some other plausible cause of productivity growth speed up, which at present there is not).
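A minimal sketch of that test, with made-up index values standing in for the FRED series (FRED productivity data is indexed like this, but these numbers are purely illustrative): fit the average log-linear growth rate of the index before and after a candidate break year, and check whether the slope actually steepens.

```python
import math

# Hypothetical annual productivity index values (illustration only,
# NOT actual FRED data for MFPPBS or OPHNFB).
index = {
    2015: 100.0, 2016: 101.3, 2017: 102.6, 2018: 104.0,
    2019: 105.4, 2020: 107.9, 2021: 110.1, 2022: 110.6,
    2023: 111.5, 2024: 112.4,
}

def annualized_growth(series, start, end):
    """Average annual growth rate between two years, in percent."""
    years = end - start
    return (math.exp(math.log(series[end] / series[start]) / years) - 1) * 100

pre = annualized_growth(index, 2015, 2021)   # slope before mass LLM adoption
post = annualized_growth(index, 2021, 2024)  # slope after

print(f"pre-2022 growth:  {pre:.2f}%/yr")
print(f"post-2022 growth: {post:.2f}%/yr")
print("slope steepened" if post > pre else "no visible acceleration")
```

With these invented numbers the post-2021 slope is flatter, which is the "no evidence yet" case; the same comparison on the real series is the test I'm describing.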

no_wizard 10 hours ago | parent [-]

There are two sides to this that I see:

First, I'd expect the trajectory of any new technology that purports to be the next big revolution in computing to follow an adoption-and-productivity pattern similar to desktop computing's, such as in the 1995-2005 period[0]. There has been no indication of such an increase since 2022[1] or 2023[2]. Even the most generous estimate comes from Anthropic itself, which in 2025 projected the following:

>Extrapolating these estimates out suggests current-generation AI models could increase US labor productivity growth by 1.8% annually over the next decade[3]

That not only assumes the best-case scenario, but would fail to eclipse the peak of computing-era productivity gains over a similar period, 1995-2005, at around 2-2.5% annually.
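Back-of-envelope, compounding both rates over a decade (the 1.8% is Anthropic's figure above; 2.25% is the midpoint of the 2-2.5% computing-era range):

```python
# Compound Anthropic's 1.8%/yr estimate vs ~2.25%/yr (midpoint of the
# 1995-2005 computing era's 2-2.5%) over ten years.
ai = 1.018 ** 10
computing = 1.0225 ** 10

print(f"AI estimate, 10 yrs:   +{(ai - 1) * 100:.1f}%")
print(f"computing era, 10 yrs: +{(computing - 1) * 100:.1f}%")
```

So even taking the optimistic projection at face value, AI's decade-long cumulative gain comes in several points below the computing era's.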

Second is cost. These tools cost multiples of what it cost to adopt computing en masse, especially from 1995 on. So whatever productivity increase they deliver is not driving overall costs down relative to the gains, in large part because you aren't seeing any substantial YoY productivity growth after adopting them. Computing followed a different trend: not only did it get cheaper over time, its relative cost was outweighed by the YoY increase in productivity.

[0]: https://www.cbo.gov/sites/default/files/110th-congress-2007-...

[1]: The first year mass-market LLM tools started to show up, particularly in the software field (GitHub Copilot had already launched in 2021, for instance)

[2]: The first year GPT-4 showed up and really blew up awareness of LLMs

[3]: https://www.anthropic.com/research/estimating-productivity-g...

bopbopbop7 12 hours ago | parent | prev [-]

Well, you would think that if there were increased productivity, there would be at least a couple of studies, some clear artifacts, or an increase in the quality of the software being shipped.

Except all we have is "trust me bro, I'm 100x more productive" twitter/blog posts, blatant pre-IPO AI company marketing disguised as blog posts, studies that show AI decreases productivity, increased outages, more CVEs, anecdotes without proof, and not a whole lot of shipped software.

10 hours ago | parent | prev [-]
[deleted]
chankstein38 12 hours ago | parent | prev | next [-]

If that's the case, I feel like you can't actually be using them or paying attention. I'm a big proponent and use LLMs for code and hardware projects constantly, but Gemini Pro and ChatGPT 5.2 are both in probably the worst state we've seen. Six months ago I was worried, but at this point I've started finding other ways to get answers: going back to the stone tablets of googling and reading Stack Overflow or reddit.

I still use them but find that more of the time is spent arguing with it and correcting problems with it than actually getting any useful product.

moshegramovsky 6 hours ago | parent [-]

> I still use them but find that more of the time is spent arguing with it and correcting problems with it than actually getting any useful product.

I feel the same. They're better at some things, yes, but worse at others, and for me they're worse at my really important use cases. I could spend a month typing prompts into Codex or AntiGravity and still be left holding the bag. Just yesterday, with a fresh prompt, Gemini bombed super hard on some basic work, insisting the problem was X when it wasn't. I don't know. I was super bullish, but now I'm feeling far from sold on it.

miltonlost 12 hours ago | parent | prev [-]

AI is definitely able to sling out more and more lines of code, yes. Whether those LOC are productive...?

chankstein38 12 hours ago | parent [-]

Tomorrow's Calc app will have 30mil lines of code and 1000 npm dependencies!

chasd00 7 hours ago | parent [-]

and 2+2 will output 4 almost all the time... just like a human would.