jyrkesh 10 hours ago

I know this is snarky; I'm sorry ahead of time. But I don't know how else to make this point...

The fact that the people running r/programming didn't know to wait until April 2 to publish this tells me that they don't have real-world experience shipping software in a business environment.

We are SO past the point of software being developed without LLMs at _all_, the trend line is never going to reverse. I don't understand the people digging in as zero LLM absolutists.

balgg 10 hours ago | parent | next [-]

I use LLMs yet I don't care to read about them or their usage at all. I can certainly see the reason why a place called "/r/programming" wouldn't want to have discussion about agent usage either, since it's not programming, it's a different activity.

petterroea 9 hours ago | parent | next [-]

Yeah, I totally get the rule. I use LLMs when developing. In fact, I've been out of Claude tokens for the week since Wednesday. I use Claude specifically for the boring, simple stuff that I don't really want to do but that Claude can handle. I'm simply not interested in discussing anything LLMs are able to do; it's not interesting.

It makes sense that a programming subreddit first and foremost discusses programming (the skill). We can go complain about Claude somewhere else if we want to.

petterroea 9 hours ago | parent [-]

Following up: anecdotally, the people I talk to who are excited about LLM development usually either care more about product development or don't have enough programming skill to see how bad the software is. Nothing wrong with either, but it can get tiresome.

balgg 9 hours ago | parent [-]

> people I talk to who are excited about LLM development usually either care more about product development

This is an interesting thing I've also noticed in public hobbyist forums/discussion spaces where someone who is more interested in making a "product" clashes with people who are just there to talk about the activity itself. It's unfortunate that it happens but it will self-correct over time (like /r/programming here) and the LLM enthusiasts of Reddit will find another place to discuss ways of using them.

singularity2001 8 hours ago | parent | prev [-]

If you use them, you should care, because there is always something to learn about the things we use.

austin-cheney 7 hours ago | parent | prev | next [-]

This seems to be a matter of industry and perspective, like saying everyone loves chocolate.

https://en.wikipedia.org/wiki/Argumentum_ad_populum

I have no reason to believe AI is used as much or as widely as you claim.

In my industry AI is fully available and was almost forced on us, and yet nobody is using it. The process of using AI and then scrutinizing the output is just more work than doing it manually. The most I encounter AI is when running job interviews and watching candidates read AI generated answers off a screen.

My industry also tends to skew much older than where I came from previously writing JavaScript full time. We are also fully remote with lots of status meetings. If I were less confident in my ability to communicate in writing maybe I would be more inclined to use AI.

I also don't really envision AI accomplishing the multitude of daily managerial and administrative assignments I have in addition to agile stories and writing code. Comparatively, writing code is the trivial part.

Mashimo 10 hours ago | parent | prev | next [-]

I think they just don't want every post to be about LLMs, vibe coding, harnesses, and whether Claude is down.

Some subreddits forbid memes, because otherwise they get flooded and the good content drowns in them.

Some subreddits only allow certain content on certain days to counter this.

What do you want the mods to do?

csin 9 hours ago | parent | prev | next [-]

It may not be an in-denial, heads-in-the-sand situation.

Sometimes when a topic gets too popular, it drowns out all the other topics. At that point, aren't they just a glorified version of r/llm?

I'll give you one personal example:

The year Caitlin Clark was drafted into the WNBA.

r/wnba went from a subreddit of 9000, to eventually 200k subs.

We were bombarded with CC posts every hour.

- Some of it was trolls staging a race war (this was during US elections).

- Some of it was genuine CC fans, who wanted to talk about CC.

- Some of it was bball nerds, who you know... wanted to talk about a bball player in a bball forum (regardless of who that bball player happens to be).

So what happened was, at any given day, 80% of the front page was CC content.

At that point, we might as well have been r/caitlinclark.

So the mods did something drastic and controversial. They banned all "low effort" CC content.

WTF does "low effort" mean? It pretty much meant 99% of CC posts got removed.

The forum went back to something that resembled a bball forum. That talked about other players. And other teams. Not just Caitlin Clark.

tarsinge 3 hours ago | parent | prev | next [-]

> We are SO past the point of software being developed without LLMs at _all_

That's exactly why I've given up on programming, development, and career subreddits. There are a lot of interesting software engineering challenges opening up, but instead of discussing them like professionals, it all gets drowned in a big negative mixture of rants against the financial AI bubble, companies using AI as an excuse for layoffs, and a general antiwork vibe. All these subreddits have become feel-good/feel-bad echo chambers for angry teens and students with no real-world professional experience.

minebreaker 9 hours ago | parent | prev | next [-]

> The fact that the people running r/programming didn't know to wait until April 2 to publish this tells me that they don't have real-world experience shipping software in a business environment.

I can't tell if this is a joke or not. Do people really care? Or maybe it's a U.S. thing? At least in my country, nobody cares...

tovej 10 hours ago | parent | prev | next [-]

I have yet to run into any serious project in the wild that is using LLMs for development. I have seen vibecoded intern prototypes that took half a day to vet and dismiss because they were completely useless.

I'm sure your experience is different, but you can't _seriously_ claim we're "past the point" of not using LLMs for programming.

Vibecoding is a fundamentally different kind of activity than actual programming. It's a pure delusional dopamine rush, compared to the deliberate engineering required to build quality software.

tarsinge 3 hours ago | parent | next [-]

How can you still not distinguish between using LLMs as tools and a non-technical person vibe coding? I have yet to run into any serious software engineer who had to dive into a legacy codebase or an unknown tech stack and found no value in e.g. Claude Code for general understanding and refactoring. Not even talking about coding: just the capacity to generate custom, contextualized documentation and examples tailored to your constraints and skills on the fly is ridiculously helpful.

tovej 2 hours ago | parent [-]

The tool being useful sometimes does not support the statement that we are "past the point" of not using LLMs.

apsurd 10 hours ago | parent | prev | next [-]

For CRUD apps though, the intern closing the ticket literally 30 minutes after it's created is really hard to battle against. Especially when those tickets were created by suits.

I generally agree: while I think vibe-coding is here to stay, it's different from designing useful products and systems, and I don't know how to convince colleagues that we should, uhh, be careful about all this code we're pushing. I fear all they see is the guy aging out.

xboxnolifes 8 hours ago | parent | prev | next [-]

What do you consider "serious"? That seems to be the main differentiator here. I know plenty of serious projects (multiple years of development and users, begun prior to LLMs) whose devs use LLMs for development.

tovej 2 hours ago | parent [-]

That makes me curious, would you like to list them?

xboxnolifes 25 minutes ago | parent [-]

They aren't anything particularly special, most projects I interact with have at least 1 dev who uses LLMs in some capacity.

The two I was thinking of when I posted are the Minecraft modpack GregTech: New Horizons, and an Old School Runescape plugin.

https://github.com/gtnewhorizons

https://github.com/osrs-reldo/tasks-tracker-plugin

dwb 9 hours ago | parent | prev | next [-]

Ok well I have plenty of serious, production-level professional experience that says otherwise. Not “vibe coding” - we certainly review the code. It’s a tool that has downsides and failure modes, of course, but it’s at the point where it’s definitely speeding us up and we are using it a lot. Trust me, I’d prefer a world, on balance, where this wasn’t true – I don’t like many of the aspects and uses of the technology – but its utility in programming is undeniable now and the capitalists aren’t taking “no” for an answer.

NietTim 9 hours ago | parent | prev | next [-]

> I have seen vibecoded intern prototypes that took half a day to vet and dismiss because they were completely useless.

They weren't useless; they proved whether the direction the prototype was exploring was worthwhile. I've personally made many completely shit code prototypes in the years before we had LLMs. Of course they weren't magically production-ready; that's not the point of a prototype.

tovej 2 hours ago | parent [-]

The example I'm thinking of did not. It was just the dumbest way to execute an idea we knew was easy to execute (scanning a parameter space).

altmanaltman 9 hours ago | parent | prev | next [-]

> I have yet to run into any serious project in the wild that is using LLMs for development.

How about Claude Code? 100% of it was vibe-coded according to its creator.[1] Google and Microsoft also claim a lot of their internal code is AI-generated now. [2] [3]

Naturally, none of the big tech companies will just release a pure vibe-coded project due to structural reasons, but you also _seriously_ can't claim that serious projects don't use LLMs as well these days. Maybe in your limited experience, it isn't true, but that doesn't generalize to what's actually happening.

1. https://www.reddit.com/r/Anthropic/comments/1pzi9hm/claude_c...

2. https://fortune.com/2024/10/30/googles-code-ai-sundar-pichai...

3. https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...

59nadir 8 hours ago | parent | next [-]

> How about Claude Code? 100% of it was vibe-coded according to its creator.

Agents are trivial to make. I don't know whether that means they're not "serious", but it's exactly the type of thing you can make yourself in a very short time, and exactly the type of thing even LLMs can't fuck up too badly.
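For what it's worth, the core of such an agent really is just a short loop. Here's a minimal sketch in Python, with the model call stubbed out as a canned stand-in (a real agent would hit an LLM API there; `call_model`, the message shapes, and the single `shell` tool are all assumptions for illustration):

```python
import subprocess

def run_shell(cmd: str) -> str:
    """Tool: run a shell command and return its combined output."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"shell": run_shell}

def call_model(history):
    # Stub standing in for a real LLM API call (an assumption for
    # illustration): it requests one shell tool call, then finishes.
    if not any(msg["role"] == "tool" for msg in history):
        return {"type": "tool_call", "tool": "shell", "args": "echo hello"}
    return {"type": "final", "text": "done"}

def agent_loop(task: str, max_steps: int = 10) -> str:
    # The whole "agent": ask the model what to do, run the requested
    # tool, feed the output back, repeat until it says it is done.
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)
        if action["type"] == "final":
            return action["text"]
        output = TOOLS[action["tool"]](action["args"])
        history.append({"role": "tool", "content": output})
    return "step limit reached"
```

Everything past this skeleton (context management, permissions, diff application) is refinement, not a different kind of program.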

With regards to the overall point, I think the existence of projects using LLMs to do development doesn't really lend credence to the idea that they're somehow preferable or desirable. People tend to use hyped things, imagine they're useful even when presented with evidence to the contrary, and generally be very resistant to sobering realities.

It took years before people stopped running Hadoop clusters to do things that a single Linux box could get done 10x faster with some basic pipes. I'm sure there are still people with "serverless backends" that work terribly in every regard compared to literally just a Linux VM somewhere. People in software development tend to find these types of things every once in a while and adopt them wholesale.

This cycle is helped by the fact that the field has been growing constantly and a lot of the adoption comes from kids who don't know any better. Every piece of shit technology that comes and goes has meat for the grinder coming straight out of university.

Would I put LLMs in the same category as these previous (nearly) useless things? Probably not... But you should never trust peoples' perception of usefulness when it comes to almost anything in software development.

Eezee 8 hours ago | parent | prev [-]

I don't know if Claude Code is actually a great example. If you've used it for longer periods of time, you will have noticed how insanely buggy it is, and for every bug they finally fix, there seems to be a new one introduced. I don't even mind vibecoding. I have vibecoded a couple of tools that my coworkers and I use every day to make our lives easier, but I'm not going to pretend they are anywhere close to something I would release to the public.

ok_dad 10 hours ago | parent | prev [-]

It’s juvenile to consider all LLM assisted coding as vibecoding. I’m not going to expand here because this topic is about as much fun to discuss as politics, but coding assistant tools are just tools.

If you give a regular person a race car, they will crash it about as fast as their vibecoded app crashes. Give the same race car to a pro and it's a different story.

I still think this was the right decision by the mods there. Talking about tools is pretty boring, and you need to train to use something like an LLM assistant. No one who can't program in a language should be using an LLM to learn it unless they already know 2-3 other languages, IMO.

apsurd 9 hours ago | parent [-]

Nah, I think it really is more nuanced than that. It's true that a non-technical person's vibe-coded side-hustle is completely different from how a professional developer may ship genAI code, but we're willfully glossing over the real problem: professionals are pushing out TONS of genAI code that's closer to vibes than to the pre-AI expectations for pushing to prod.

ModernMech 4 hours ago | parent | prev | next [-]

lol yeah the people running the programming subreddit don't understand social context, who could have guessed?

btheunissen 10 hours ago | parent | prev | next [-]

I hate AI video, I hate AI art, but if you are pretending that AI isn't going to be writing code for 99% of projects going forward, you are absolutely kidding yourself.

duskdozer 9 hours ago | parent | next [-]

AI video and art is going to be increasingly used in advertising, news/reporting, games, etc. Therefore, you aren't allowed to hate it or even complain about it. Right?

hperrin 9 hours ago | parent | prev [-]

AI will be writing the code for shit-slop apps and libraries. The good ones will be written by humans.

vanrohan 9 hours ago | parent | prev | next [-]

I know LLM-generated code comes with its own challenges, but the absolutists are definitely clinging to a time that has passed. I saw a recent discussion on Immich where a maintainer flat-out denied a PR, saying "That diff looks LLM-generated to me; is that indeed the case? If so, we'd prefer not to receive a PR for it". The PR was from a professional software engineer who worked weeks of his free time on a big feature. Well structured + tested. Dismissed just because AI was used. https://github.com/immich-app/immich/discussions/23745#discu...

59nadir 9 hours ago | parent [-]

"Well structured + tested". Who would know? The diff is almost 200k changed lines. Good on them for saying no to this nonsense.

There's a good chance the actual needed implementation is less than 20k lines (I've found that LLM bloat grows exponentially), but even that's a stretch to review and accept wholesale.

Deeds67 8 hours ago | parent [-]

I'm the person working on that fork. Yes, it has now diverged 200k+ lines, but half of that is specs, research, and documentation, and it includes a month's worth of work.

The comment in question was a small feature of about 1.5k lines changed and it was solidly tested.

59nadir 8 hours ago | parent [-]

Eh, fair enough. 1.5k is reasonable. Have you tried just writing it yourself instead? It's likely it'll be less than 1k lines and you should have no problems writing an implementation yourself if you understand the structure of the LLM version.

Deeds67 8 hours ago | parent [-]

[dead]

59nadir 6 hours ago | parent [-]

Heh, fair enough. To me this comes off as "I'm unable to write it myself [possibly because I've outsourced my thinking too much]", to be honest, but I'm not going to argue; you're the one who presumably wants this code to end up in that repository.

I wouldn't really consider (what is likely) sub-1kloc a "large feature", but to each their own.

Deeds67 6 hours ago | parent [-]

I don't want it to end up in that repo anymore, hence the fork. I've got a growing community of people who have been eagerly awaiting this feature and a ton more that I built.

I definitely could write this by hand - the stuff I built in the last 10 years before LLMs was more complex than this - but there's no way I'm spending all my free time slowly crafting something when I can just use AI and get the same results much faster.

klustregrif 10 hours ago | parent | prev [-]

> I don't understand the people digging in as zero LLM absolutists.

Relevant read: https://en.wikipedia.org/wiki/Luddite

I feel like it’s easy to understand what’s motivating these individuals to take that stance.

yoz-y 10 hours ago | parent [-]

Definitely not the same. Luddites were fighting for humane working conditions; breaking machines was just a means to an end. They weren’t doing it because machines were the problem.

The anti-AI crowd, on the other hand, just doesn't like AI. A modern equivalent of a Luddite would be someone going on strike to protest firings.

southerntofu 9 hours ago | parent [-]

You are being overly dismissive of a mindset you obviously don't understand. Of course being anti-AI is about decent living conditions for humans. Most of us don't believe in singularity or Matrix-style threats.

But current AI is actively destroying our breathable/livable planet by drawing unmatched quantities of resources (see also DRAM shortage, etc), all the while exploiting millions of non-union workers across the world (for classification/transcription/review), and all this for two goals:

1) try to replace human labor: the problem is, we know any extracted value (if any) will benefit the bourgeoisie and will never be redistributed to the masses, because that's exactly what happened with the previous industrial revolutions (Asimov-style socialism is not exactly around the corner)

2) try to surveil everyone with cameras and microphones everywhere, and build armed (semi-)autonomous robots to guard our bourgeois masters and their data centers

There is nothing in this entire project that can be interpreted to benefit the workers. People opposing AI are just lucid about who that's benefiting, and in that sense the luddite comparison is very appropriate.

yoz-y 9 hours ago | parent [-]

You have misinterpreted my comment. But I concede that I should have written it more clearly.

I divide anti-AI people into two groups. Those who don’t like AI because of what it is, and those who don’t like it because of its impact on society. Naturally there is an overlap.

Luddites were not opposed to the technology. So the comparison to them is only correct for the latter group.

Not talking about LLMs on a forum is not going to change anything in the grand scheme of things. It could be a protest, but I see it more (that's the feeling I get from the announcement) as a means to protect the forum from being overrun, regardless of whether AI is ultimately good or bad.

Also note that nowhere in my comment have I stated my position in this argument.

southerntofu 7 hours ago | parent [-]

Sorry for misinterpreting your original comment.

I'm not really convinced there are people who don't like AI "because of what it is" - I mean, because of what it is, beyond any social/political considerations.

The only case I know of is the open letter with Sam Altman and other AI investors calling out the existential danger of AI, which in my view was a way to divert the debate from political questions to hypothetical Matrix/Terminator questions about consciousness and singularity.

em-bee 6 hours ago | parent [-]

really? is it so hard to believe that people dislike AI because it is unreliable, can't be trusted, changes how we work with code, and takes the fun out of coding?

i am not worried about social consequences. society can adapt.

i am also not worried about energy use. we have endless clean energy if we can figure out how to use it.

yes, i am worried about society choosing the wrong adaptation. that is, i believe we should train everyone to be teachers, doctors, scientists, and artists - the stuff that AI should not be doing. but i am not worried about using AI for automation putting people out of jobs, if we give them the opportunity to learn new jobs and,

IF, AND ONLY IF, we get AI to do its work with 100% reliability and accuracy.

only then will AI be useful. i have tons of software projects that i'd like to get done. but i can't trust AI to do them for me, because i would spend even more time verifying the results than i would coding it myself.

so yeah, i absolutely don't like AI for what it is: a tool with limited uses that requires me to work in a way i don't want, if i want to benefit from it.