mjaquilina 3 hours ago

I'm not convinced that PyPI is the right metric to use to answer this question. Some (admittedly anecdotal) observations:

1) I'm a former SWE in a business role at a small-market publishing company. I've used Claude Code to automate boring processes that previously consumed weeks of our ops and finance teams' time per year. These aren't technically advanced, but previously would have required in-house dev talent that would not have been within reach of small businesses. I wouldn't have had the time to code these things on my own, but with AI assistance the time investment is greatly reduced (and mostly focused on QA). The only needle moved here is on a private Github repo, but it's real shipped code with small but direct impact.

2) I used to often find myself writing simple Perl wrappers to various APIs for personal or work use. I'd submit these to CPAN (Perl's equivalent to PyPI) in case anyone else could use them to save the 30-60 minutes of work involved. These days I don't bother -- most AI tools can build these in a matter of seconds; publishing them to CPAN or even Github now feels like unnecessary cruft, especially when they're likely to go without active maintenance. So, my LOC published to public repos is down, even though the amount of software produced is the same. It's just that some of that software has become less useful to the world writ large.

3) The code that's possible to ship quickly with pure AI (vibe coding) is by definition not the kind of reusable code you'd want to distribute on PyPI. So, I'd expect any productivity impact from AI on OSS that's designed to be reusable to come very slowly, versus a "hockey stick" impact.