jdiff 11 hours ago

The IT industry is also full of salesmen and con men, both enjoy unrealistic exaggeration. Your statements would not be out of place 20 years ago when the iPhone dropped. Your statements would not be out of place 3 years ago before every NFT went to 0. LLMs could hit an unsolvably hard wall next year and settle into a niche of utility. AI could solve a lengthy list of outstanding architectural and technical problems and go full AGI next year.

If we're talking about changing the industry, we should see some clear evidence of that. But despite extensive searching myself and after asking many proponents (feel free to jump in here), I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI. If this is so foundationally groundbreaking, that should be a clear signal. Personally, I would expect to see an explosion of this even if the hype is taken extremely conservatively. But I can't even track down a few solid examples. So far my searching only reveals one-off pull requests that had to be laboriously massaged into acceptability.

bsenftner 11 hours ago | parent | next [-]

> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI.

That's because using AI to write code is a poor application of LLMs. LLMs are better suited to summary, advice, and reflection than to being forced into a Rube Goldberg machine. Use your favorite LLM as a Socratic advisor, but not as a coder, and certainly not as an unreliable worker.

eddythompson80 11 hours ago | parent | next [-]

The entire hype for LLMs is that they can *do* anything. Even if only writing code, that could justify their hype. If LLMs mean Grammarly is now a lot better (and offered by big tech) then it’ll be very disappointing (economically speaking)

bsenftner 7 hours ago | parent [-]

I believe it was Flavor Flav who said: "Don't believe the hype", and that pretty much applies to everything humans do.

robwwilliams 11 hours ago | parent | prev | next [-]

I support this comment. AI for coding still involves much prodding and redirecting in my limited experience. Even getting Claude to produce a simple SVG for a paper is a struggle, in my experience.

But for helping me as a partner in neurophilosophy conversations, Claude is unrivaled even compared to my neurophilosophy colleagues—the speed and responsiveness are impossible to beat. LLMs are great at pushing me to think harder. They provide the wall against which to bounce ideas, and those bounces often come from surprising and helpful angles.

Arch-TK 11 hours ago | parent | prev | next [-]

It's absolutely hilarious reading all these "you're holding it wrong" arguments because every time I find one it contradicts the previous ones.

usrbinbash 10 hours ago | parent | prev | next [-]

> That's because using AI to write code is a poor application of LLM AIs

Then why is that exact usecase being talked about ad nauseam by many many many "influencers", including "big names" in the industry? Why is that exact usecase then advertised by leading companies in the industry?

bsenftner 8 hours ago | parent [-]

It probably has a lot to do with the fact that those writing the code are not those marketing the finished software, and the two groups do not communicate very well. Once marketing gets any idea's momentum going, they will run with it, because they can sell it, and then make engineering pay that bill. Not their problem.

JimmaDaRustla 11 hours ago | parent | prev | next [-]

Agreed. That argument they made was a straw-man which doesn't really pertain to where LLMs are being leveraged today.

jdiff 9 hours ago | parent [-]

The claim that you made was "if you still don't see how LLMs is changing [the IT] industry, you haven't been paying attention." Pointing out that there is no visible evidence of that change in the industry you mentioned and inviting you to provide some where others have repeatedly failed is not attacking a strawman.

komali2 11 hours ago | parent | prev [-]

> Use your favorite LLM as a Socratic advisor

Can you give an example of what you mean by this?

hyperadvanced 11 hours ago | parent [-]

You read the book and have the llm ask you questions to help deepen your understanding, e.g.

aydyn 10 hours ago | parent [-]

Or you dont read the book at all and ask the llm to give you the salient points?

noah_buddy 10 hours ago | parent | next [-]

Socratic method usually refers to a questioning process, which is what the poster above is getting at in their terminology. Imo

teg4n_ 10 hours ago | parent | prev [-]

And cross your fingers it didn’t make them up?

aydyn 10 hours ago | parent [-]

Yes. If it can get me through 10 books in the same time it takes you to get through 1 I am fine with an extra 1% error rate or whatever.

jdiff 9 hours ago | parent [-]

If I spend an afternoon on CliffsNotes, I haven't read a hundred books in a day. This isn't one weird trick to accelerate your reading; it's entirely missing the point. If any book could be summarized in a few points, there would be no point to writing anything more than a BuzzFeed listicle.

aydyn 9 hours ago | parent [-]

Lots of books CAN be summarized in a few points.

komali2 an hour ago | parent [-]

Yes, that's true, but bullet point ideology isn't very useful imo.

For example you can easily sum up "How to Win Friends and Influence People" into a few short recommendations. "Empathize with people, listen closely to people, let them do most of the talking, be quick to be upfront and honest with your faults." But it gets tenuous and some of the exact meaning is lost, and the bullet points aren't very convincing on their own of the effectiveness of the recommended methods.

The book fleshes out the advice with clarifications as well as stories from Carnegie's life of times he used the techniques and their results.

I think humans are good at remembering stories. For example I always remember the story about him letting his dog off the leash in a dog park and how he leveraged his technique for getting off the hook when a cop confronted him about it. I think I remember that better than the single bullet point advice of "readily and quickly admit when you're wrong." In fact I think the end of each chapter succinctly states the exact tip and I'm sure I'm misstating it here.

I could use anki to memorize a summary but I don't think I'd be able to as effectively incorporate the techniques into my behavior without the working examples, stories, and evidence he provides.

And that's just non fiction. I can't fathom the point of summarizing a fiction book, whose entire enjoyment comes from the reading of it.

pama 10 hours ago | parent | prev | next [-]

> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI

This popular repo (35.6k stars) documents the fraction of code written by LLM for each release since about a year ago. The vast majority of releases since version 0.47 (now at 0.85) had the majority of their code written by LLM (average code written by aider per release since then is about 65%.)

https://github.com/Aider-AI/aider

https://github.com/Aider-AI/aider/releases

KerrAvon 10 hours ago | parent [-]

I think we need to move the goalposts to "unrelated to/not in service of AI tooling" to escape easy mode. Replace some core unix command-line tool with something entirely vibecoded. Nightmare level: do a Linux graphics or networking driver (in either Rust or C).

hmry 9 hours ago | parent [-]

Yes, I agree. The same way that when you ask "Are there any production codebases written in Language X", you typically mean "excluding the Language X compiler & tooling itself." Because of course everyone writing a tool loves bootstrapping and dogfooding, but it doesn't tell you anything about production-readiness or usefulness / fitness-for-purpose.

belter 11 hours ago | parent | prev | next [-]

> If we're talking about changing the industry, we should see some clear evidence of that.

That’s a great point...and completely incompatible with my pitch deck. I’m trying to raise a $2B seed round on vibes, buzzwords, and a slightly fine-tuned GPT-3.5.. You are seriously jeopardizing my path to an absurdly oversized yacht :-))

BoiledCabbage 10 hours ago | parent | prev | next [-]

> The IT industry is also full of salesmen and con men, both enjoy unrealistic exaggeration. Your statements would not be out of place 20 years ago when the iPhone dropped. Your statements would not be out of place 3 years ago before every NFT went to 0. LLMs could hit an unsolvably hard wall next year and settle into a niche of utility.

The iPhone and the subsequent growth of mobile (and the associated growth of social media, which is really only possible in its current form with ubiquitous mobile computing) are evidence it did change everything. Society has been reshaped by mobile/iPhone and its consequences.

NFTs were never anything, and there was never an argument that they were. They were a speculative financial item, and it was clear all the hype was due to greater fools and FOMO. To equate the two is silly. That's like arguing some movie blockbuster like Avengers Endgame was going to "change everything" because it was talked about and advertised. It was always just a single piece of entertainment.

Finally for LLMs, a better comparison for them would have been the 80's AI winter. The question should be "why will this time not be like then?" And the answer is simple: If LLMs and generative AI models never improve an ounce - If they never solve another problem, nor get more efficient, nor get cheaper - they will still drastically change society because they are already good enough today. They are doing so now.

Advertising, software engineering, video making. The tech is already far enough along that it is changing all of these fields. The only thing happening now is the time it takes for idea diffusion. People learning new things and applying them are the slow part of the loop.

You could have made your argument pre-ChatGPT, and possibly in the window of the following year or two, but at this point the tech to change society exists; it just needs time to spread. All it needs are two things: the tech stays the same, and prices roughly stay the same. (No improvements required.)

Now there still is a perfectly valid argument to make against the more extreme claims we hear of: all work being replaced..., and that stuff. And I'm as poorly equipped to predict that future as you (or anyone else) so won't weigh in - but that's not the bar for huge societal change.

The tech is already bigger than the iPhone. I think it is equivalent to social media (mainly because I think most people still really underestimate how enormous the long-term impact of social media on society will be: politics, mental health, extremism, addiction. All things that existed before but are now "frictionless" to obtain. But that's for some other post...).

The question in my mind is will it be as impactful as the internet? But it doesn't have to be. Anything between social media and internet level of impact is society changing. And the tech today is already there, it just needs time to diffuse into society.

You're looking at Facebook after introducing the algorithm for engagement. It doesn't matter that society wasn't different overnight, the groundwork had been laid.

hobs 11 hours ago | parent | prev | next [-]

One problem with the code generation tools is that they devalue code at the same time as they produce low-quality code (without a lot of human intervention).

So a project that mostly is maintained by people who care about their problem/code (OSS) would be weird to be "primarily maintained by AI" in a group setting in this stage of the game.

jdiff 9 hours ago | parent [-]

Exactly the problem. It doesn't need to be good enough to work unsupervised in order to gain real adoption. It just needs to be a performance or productivity boost while supervised. It just needs to be able to take an AI-friendly FOSS dev (there are many), and speed them along their way. If we don't have even that, then where is the value (to this use case) that everyone claims it has? How is this going to shake the foundations of the IT industry?

hobs 5 hours ago | parent [-]

Because convincing the owners of the house they have a shaky foundation and you have a cheap fix matters more than the actual integrity and the fix.

There's no question that the predictions around LLMs are shaking up the industry - see mass layoffs and offers for 8 figures to individual contributors. The question is will it materially change things for the better? no idea.

jdiff 4 hours ago | parent [-]

Do you have any more info on the 8 figures? I hadn't come across that, but that's quite interesting to hear.

For the mass layoffs, I was under the belief that those were largely driven by the tax code alterations in the US.

hobs 4 hours ago | parent [-]

https://www.theregister.com/2025/06/13/meta_offers_10m_ai_re... Nah, big companies don't even care about that very much, they have a million tax dodges, its the smaller startups that are deeply impacted by that type of change.

runarberg 11 hours ago | parent | prev | next [-]

This has been researched, and while the existing research is young and inconclusive, the outlook is not so good for the AI industry, or rather for the utility of their product, and the negative effects it has on their users.

https://news.ycombinator.com/item?id=44522772

JimmaDaRustla 11 hours ago | parent | prev | next [-]

> LLMs could hit an unsolvably hard wall next year and settle into a niche of utility

LLMs in their current state have integrated into the workflows for many, many IT roles. They'll never be niche, unless governing bodies come together to kill them.

> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI

Straw man argument - this is in no way a metric for validating the power of LLMs as a tool for IT roles. Can you not find open source code bases that leverage LLMS because you haven't looked, or because you can't tell the difference between human and LLM code?

> If this is so foundationally groundbreaking, that should be a clear signal.

As I said, you haven't been paying attention.

Denialism - the practice of denying the existence, truth, or validity of something despite proof or strong evidence that it is real, true, or valid

NilMostChill 10 hours ago | parent | next [-]

> LLMs in their current state have integrated into the workflows for many, many IT roles. They'll never be niche, unless governing bodies come together to kill them.

That is an exaggeration; they are integrated into some workflows, usually in a provisional manner while the full implications of such integrations are assessed for mid- to long-term viability.

At least in the fields of which i have first hand knowledge.

> Straw man argument - this is in no way a metric for validating the power of LLMs as a tool for IT roles. Can you not find open source code bases that leverage LLMS because you haven't looked, or because you can't tell the difference between human and LLM code?

Straw-man rebuttal: presenting an imaginary position in which this statement doesn't apply doesn't invalidate the statement as a whole.

> As I said, you haven't been paying attention.

Or alternatively you've been paying attention to a selective subset of your specific industry and have made wide extrapolations based on that.

> Denialism - the practice of denying the existence, truth, or validity of something despite proof or strong evidence that it is real, true, or valid

What's the one where you claim strong proof or evidence while only providing anecdotal "trust me bro" ?

jdiff 9 hours ago | parent | prev | next [-]

> LLMs in their current state have integrated into the workflows for many, many IT roles. They'll never be niche, unless governing bodies come together to kill them.

Having a niche is different from being niche. I also strongly believe you overstate how integrated they are.

> Straw man argument - this is in no way a metric for validating the power of LLMs as a tool for IT roles. Can you not find open source code bases that leverage LLMS because you haven't looked, or because you can't tell the difference between human and LLM code?

As mentioned, I have looked. I told you what I found when I looked. And I've invited others to look. I also invited you. This is not a straw man argument, it's making a prediction to test a hypothesis and collecting evidence. I know I am not all seeing, which is why I welcome you to direct my eyes. With how strong your claims and convictions are, it should be easy.

Again: You claim that AI is such a productivity boost that it will rock the IT industry to its foundations. We cannot cast our gaze on closed source code, but there are many open source devs who are AI-friendly. If AI truly is a productivity boost, some of them should be maintaining widely-used production code in order to take advantage of that.

If you're too busy to do anything but discuss, I would instead invite you to point out where my reasoning goes so horrendously off track that such examples are apparently so difficult to locate, not just for me, but for others. If one existed, I would additionally expect that it would be held up as an example and become widely known for it with as often as this question gets asked. But the world's full of unexpected complexities, if there's something that's holding AI back from seeing adoption reflected in the way I predict, that's also interesting and worth discussion.

dingnuts 11 hours ago | parent | prev [-]

> Can you not find open source code bases that leverage LLMS because you haven't looked, or because you can't tell the difference between human and LLM code?

The money and the burden of proof are on the side of the pushers. If LLM code is as good as you say it is, we won't be able to tell that it's merged. So, you need to show us lots of examples of real world LLM code that we know is generated, a priori, to compare

So far most of us have seen ONE example, and it was that OAuth experiment from Cloudflare. Do you have more examples? Who pays your bills?

cdelsolar 10 hours ago | parent [-]

What are you talking about? I have multiple open-source projects where I've generated multiple PRs with 90+% AI tools. I don't care that the code isn't as good, because I have people using these features and the features work.

1) https://github.com/domino14/Webolith/pull/523/files (Yes, the CSS file sucks. I tried multiple times to add dark mode to this legacy app and I wasn't able to. This works, and is fine, and people are using it, and I'm not going to touch it again for a while)

2) https://github.com/domino14/macondo/pull/399 - A neural net for playing Scrabble. Has not been done before, in at least an open-source way, and this is a full-fledged CNN built using techniques from Alpha Zero, and almost entirely generated by ChatGPT o3. I have no idea how to do it myself. I've gotten the net to win 52.6% of its games against a purely static bot, which is a big edge (trust me) and it will continue to increase as I train it on better data. And that is before I use it as an actual evaluator for a Monte Carlo bot.

I would _never_ have been able to put this together in 1-2 weeks when I am still working during the day. I would have had to take NN classes / read books / try many different network topologies and probably fail and give up. Would have taken months of full-time work.

3) https://github.com/woogles-io/liwords/pull/1498/files - simple, but one of many bug fixes that was diagnosed and fixed largely by an AI model.

ModernMech 8 hours ago | parent [-]

I think this is what the original poster means. The value proposition isn't "As a developer, AI will allow you to unlock powers you didn't have before and make your life easier". They're selling it as "AI can do you job."

We are being sold this idea that AI is able to replace developers, wholesale. But where are the examples? Seemingly, every example proffered is "Here's my personal project that I've been building with AI code assistants". But where are the projects built by AI developers (i.e. not people developers)? If AI was as good as they say, there should be some evidence of AI being able to build projects like this.

ToucanLoucan 10 hours ago | parent | prev | next [-]

> The IT industry is also full of salesmen and con men, both enjoy unrealistic exaggeration. Your statements would not be out of place 20 years ago when the iPhone dropped. Your statements would not be out of place 3 years ago before every NFT went to 0. LLMs could hit an unsolvably hard wall next year and settle into a niche of utility.

Not only is that a could, I'd argue they already are. The huge new "premier" models are barely any better than the big ticket ones that really kicked the hype into overdrive.

* Using them as a rubber duck that provides suggestions back for IT problems and coding is huge, I will fully cosign that, but it is not even remotely worth what OpenAI is valued at or would need to charge for it to be profitable, let alone pay off its catastrophic debt. Meanwhile every other application is a hard meh.

* The AI generated video ads just look like shit and I'm sorry, call me a luddite if you will, but I just think objectively less of companies that leverage AI video/voices/writing in their advertisements. It looks cheap, in the same way dollar store products have generic, crappy packaging, and makes me less willing to open my wallet. That said I won't be shocked at all if that sticks around and bolsters valuations, because tons of companies worldwide have been racing to the bottom for decades now.

* My employer has had a hard NO AI policy for both vetting candidates and communicating with them for our human resources contracting and we've fired one who wouldn't comply. It just doesn't work, we can tell when they're using bots to review resumes because applicants get notably, measurably worse.

LLMs are powerful tools that have a place, but there is no fucking UNIVERSE where they are the next iPhone that silicon valley is utterly desperate for. They just aren't.

JimmaDaRustla 11 hours ago | parent | prev | next [-]

> I can't find a single open source codebase, actively used in production, and primarily maintained and developed with AI.

As I stated, you haven't been paying attention.

mcherm 11 hours ago | parent | next [-]

A better-faith response would be to point out an example of such an open source codebase OR tell why that specific set of restrictions (open-source, active production, primarily AI) is unrealistic.

For instance, one might point out that the tools for really GOOD AI code authoring have only been available for about 6 months so it is unreasonable to expect that a new project built primarily using such tools has already reached the level of maturity to be relied on in production.

JimmaDaRustla 11 hours ago | parent | next [-]

I don't have time to handhold the ignorant.

I do however have time to put forth my arguments now that I use LLMs to make my job easier - if it weren't for them, I wouldn't be here right now.

eddythompson80 11 hours ago | parent | next [-]

You don’t have time to post a link with an example. You have time to post a wall of text instead.

JimmaDaRustla 11 hours ago | parent [-]

My code isn't open source.

eddythompson80 11 hours ago | parent [-]

Checkmate

JimmaDaRustla 10 hours ago | parent [-]

You didn't checkmate anything.

You're perfectly capable of looking at the world around you. You're arguing in bad faith using a false dichotomy that I must be able to produce examples or my argument is not valid. You're trying to suck all the air out of the room and waste time.

https://tools.simonwillison.net/

ChECk MaTee

slacktivism123 10 hours ago | parent [-]

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

https://news.ycombinator.com/newsguidelines.html

Aside from that, I don't see how the collection of simple one-shot JavaScript wrappers (like "Extract URLs", "Word Counter", and "Pomodoro Timer") that you keep bringing up is related to your argument.

dminik 10 hours ago | parent | prev [-]

One would think that with all of the time AI is clearly saving you, you could spare some of it for us uneducated peasants.

eddythompson80 an hour ago | parent [-]

They are clearly busy dunking on us all.

eddythompson80 11 hours ago | parent | prev [-]

“you haven’t been listening. It’s inevitable to happen”

mashygpig 11 hours ago | parent | prev | next [-]

I don’t find it fair that you point out straw man in your parent comment and then use ad hominem in this comment. I would love to see you post some examples. I think you’d have a chance of persuading several readers to at least be more open minded.

hinkley 11 hours ago | parent | prev | next [-]

> But despite extensive searching myself and after asking many proponents

11 hours ago | parent | prev | next [-]
[deleted]
nkrisc 11 hours ago | parent | prev | next [-]

So… which ones?

JimmaDaRustla 11 hours ago | parent [-]

Mine, it's how I have time to argue with the denialists right now.

eddythompson80 10 hours ago | parent [-]

Nice sales pitch.

belter 11 hours ago | parent | prev [-]

At least we know you are human, since you are gaslighting us instead of citing a random link that leads to a 404 page. An LLM would have confidently hallucinated a broken reference by now.

Cthulhu_ 11 hours ago | parent [-]

Not necessarily; one aspect of the LLM arms race is having the most up-to-date records, or using a search engine to find stuff.

jdiff 9 hours ago | parent [-]

If the LLM thinks to consult a search engine. If the next token predicted is the start of a link rather than the start of a tool call, it's going to be spitting out a link. Getting them to reliably use tools rather than freeball seems to be quite a difficult problem to solve.
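The failure mode described here can be caricatured in a few lines of Python. This is a toy simulation, not any real model's internals; the probability and the names (`next_action`, `p_tool_call`) are invented purely for illustration:

```python
import random

# Toy sketch: caricature the next-token decision described above. With some
# probability the model "reaches for" a search tool; otherwise it freeballs
# a link from memory, with no guarantee the link exists.
def next_action(p_tool_call: float) -> str:
    return "tool_call" if random.random() < p_tool_call else "memory_link"

random.seed(0)
trials = [next_action(0.6) for _ in range(10_000)]

# Links emitted from memory are the ones at risk of being stale or invented.
memory_link_rate = trials.count("memory_link") / len(trials)
print(round(memory_link_rate, 2))  # roughly 0.4 under these made-up numbers
```

Whatever the real numbers are, any nonzero chance of skipping the tool call means some fraction of emitted links were never checked against a live index.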

4star3star 11 hours ago | parent | prev [-]

At the heart of it all is language. Logic gates to assembly to high level programming languages are a progression of turning human language into computed processes. LLMs need to be tuned to recognize ambiguity of intention in human language instructions, following up with clarifying questions. Perhaps quantum computing will facilitate the process, the AI holding many fuzzy possibilities simultaneously, seeking to "collapse" them into discrete pathways by asking for more input from a human.