gitremote 17 hours ago

Software development jobs must be very diverse if even this anti-vibe-coding guy thinks AI coding definitely makes developers more productive.

In my work, the bigger bottleneck to productivity is that very few people can correctly articulate requirements. I work in backend API development, which is completely different from fullstack development that merely includes backend work. If you ask PMs about backend requirements, they will dodge you, and if you ask front-end or web developers, they are waiting for you to provide them with the API. The hardest part is understanding the requirements. It's not because of illiteracy; it's because software development is a lot more than coding and requires critical thinking to discover the requirements.

omnicognate 16 hours ago | parent | next [-]

> Software development jobs must be very diverse if even this anti-vibe-coding guy thinks AI coding definitely makes developers more productive.

As a Professor of English who teaches programming to humanities students, the writer has had an extremely interesting and unusual academic career [1]. He sounds awesome, but I think it's fair to suggest he may not have much experience of large scale commercial software development or be particularly well placed to predict what will or will not work in that environment. (Not that he necessarily claims to, but it's implicit in strong predictions about what the "future of programming" will be.)

[1] https://stephenramsay.net/about/

godelski 14 hours ago | parent | next [-]

Hard to say, but backing his claim that he's been programming since the 90's, his CV shows he was working on stuff clearly beyond basic undergraduate skill level since the early 2000's. I'd be willing to bet he has more years under his belt than most HN users. I mean I'm considered old here, in my mid 30's, and this guy has been programming for most of my life. Though that doesn't necessarily imply experience, or more specifically, experience in what.

That said, I think people really underappreciate how diverse programmers actually are. I started in physics and came over when I went to grad school. While I wouldn't expect a physicist to do super well on leetcode problems, I've seen those same people write incredible code optimized for HPC systems, and they're really good at tracing bottlenecks (a skill that translates from physics really, really well). Hell, the best programmer I've ever met got that way doing his PhD in mechanical engineering. He's practically the leading expert in data streaming for HPC systems and gained this skill because he needed more performance for his other work.

There's a lot of different types of programmers out there but I think it's too easy to think the field is narrow.

mikewarot 12 hours ago | parent | next [-]

>I'm considered old here, in my mid 30's

I'm 62, and I'm not old yet, you're just a kid. ;-)

Seriously, there are some folks here who started on punch cards and/or paper tape in the 1960s.

wombatpm 8 hours ago | parent | next [-]

I played with punch cards and polystyrene test samples from the Standard Oil refinery where my father worked in the early 70's, and my first language after BASIC was Fortran 77. Not old either.

freeopinion 9 hours ago | parent | prev | next [-]

30 years ago my coworkers called me Grandpa, so I get it both ways.

godelski 11 hours ago | parent | prev [-]

Thanks. I meant it more in a joking way, poking fun at the community. I know I'm far too young to have earned a gray beard, but I hope to in the next 20-30 years ;-) I've still got a lot to learn till that happens

Aeolun 11 hours ago | parent [-]

You wish, that gray beard sometimes appears in your late thirties.

godelski 10 hours ago | parent [-]

Maybe. But also, what I thought was a gray beard in my early 20's is very different from what I think a gray beard is now. The number of people I've considered wizards has decreased, and I think this should be true for most people. It's harder to differentiate experts as a novice, but as you get closer the resolution increases.

jader201 5 hours ago | parent [-]

The more I know, the more I know I don’t know.

popcorncowboy 4 hours ago | parent [-]

...and the more I know you don't know. [On the disappearance of wizards as you age]

godelski an hour ago | parent [-]

Both definitely contribute. But at the same time, the people who stay wizards (and the people you only later realize are wizards) appear more magical than ever.

Some magic tricks are unimpressive when you know how they are done. But that's not true for all of them. Some only become more and more impressive, and can only truly be appreciated by other masters. The best magic tricks don't just impress an audience; they impress an audience of magicians.

pjmlp 23 minutes ago | parent | prev | next [-]

My first home computer was bought in 1986, before that the only electronics at home were Game & Watch handhelds, like Manhole.

I guess I am reaching Gandalf status then. :)

anthk 2 hours ago | parent | prev | next [-]

38 here. If you didn't suffer Win9x's 'stability', then editing X11 config files by hand, getting mad with ALSA/dmix, writing new ad-hoc drivers for weird BTTV tuners by reusing old known ones for $WEIRDBRAND, you didn't live.

groovy2shoes 14 minutes ago | parent [-]

the anxiety that i might fry my monitor by setting the wrong scan rate haunts me to this day

AceJohnny2 13 hours ago | parent | prev | next [-]

> I mean I'm considered old here, in my mid 30's

sigh

bojo 12 hours ago | parent | next [-]

I feel like a grandpa after reading that comment now.

jjgreen 12 hours ago | parent | prev | next [-]

I got a coat older than that (and in decent nick).

LgWoodenBadger 11 hours ago | parent [-]

I used to tell the “kids” that I worked with that I have a bowling ball older than them.

wombatpm 8 hours ago | parent | next [-]

I was greeted with blank stares by the kids on my team when they wanted to rewrite an existing program from scratch and I said that would work about as well as it did for Netscape. Dang whippersnappers

anthk 2 hours ago | parent | prev [-]

I own 90's comic books and video games older than most Gen-Z users on HN.

godelski 12 hours ago | parent | prev [-]

But am I wrong? I am joking, but good jokes have an element of truth...

omnicognate 11 hours ago | parent | next [-]

Depends what you mean by "old". If you mean elderly, then obviously you're not. If you mean "past it", it might reassure you to know the average expecting mother is now in her 30s (in the UK). Even if you just mean "grown up", recent research [1] on brain development identifies adolescence as typically extending into the early thirties, with (brain) adulthood running from there to the mid sixties, and only then entering the "early aging" stage.

For my part, I'm a lot older than you and don't consider myself old. Indeed, I think prematurely thinking of yourself as old can be a pretty bad mistake, health-wise.

[1] https://www.nature.com/articles/s41467-025-65974-8

godelski 10 hours ago | parent [-]

FWIW I doubt I'd consider you old were I to know your actual age. I still think I'm quite young

AceJohnny2 9 hours ago | parent [-]

"inside every old person there is a young one wondering what happened."

xupybd 7 hours ago | parent | prev | next [-]

I assume you're on the younger end

godelski 7 hours ago | parent [-]

No need to assume, I already told everyone my age

AceJohnny2 11 hours ago | parent | prev [-]

It'd be interesting to know the median age of HN commenters.

I guess the median age of YCombinator cohorts is <30?

7 hours ago | parent | prev [-]
[deleted]
assimpleaspossi 11 hours ago | parent | prev | next [-]

>As a Professor of English who teaches programming to humanities students

That is the strangest thing I've heard today.

jaimie 10 hours ago | parent [-]

The world of the Digital Humanities is a lot of fun (and one I've been a part of, teaching programming to historians and philosophers of science!). It uses computation to provide new types of evidence for historical or rhetorical arguments and data-driven critiques. There's an art to it as well, showing evidence for things like multiple interpretations of a text through the stochasticity of various text extraction models.

From the author's about page:

> I discovered digital humanities (“humanities computing,” as it was then called) while I was a graduate student at the University of Virginia in the mid-nineties. I found the whole thing very exciting, but felt that before I could get on to things like computational text analysis and other kinds of humanistic geekery, I needed to work through a set of thorny philosophical problems. Is there such a thing as “algorithmic” literary criticism? Is there a distinct, humanistic form of visualization that differs from its scientific counterpart? What does it mean to “read” a text with a machine? Computational analysis of the human record seems to imply a different conception of hermeneutics, but what is that new conception?

https://stephenramsay.net/about/

ykonstant 3 minutes ago | parent [-]

This is fascinating.

moron4hire 11 hours ago | parent | prev | next [-]

That was such a strange aspect. If you will excuse the tortured analogy of comparing programming to woodworking: there is a lot of talk about hand tools versus power tools, but among people who aren't in a production capacity--not making cabinets for a living, not making furniture for a living--you see people choosing to use hand tools exclusively because they just enjoy it more. There isn't pressure that "you must use power tools or else you're in self-denial about their superiority." Well, at least among people who actually practice the hobby; you'll find plenty of armchair woodworkers in the comments section on YouTube. But I digress. For someone who claims to enjoy programming for the sake of programming, it was a very strange statement to make about coding.

I very much enjoy the act of programming, but I'm also a professional software developer. Incidentally, I've almost always worked in fields where subtly wrong answers could get someone hurt or killed. I just can't imagine either giving up my joy in the former case or abdicating my responsibility to understand my code in the latter.

And this is why the wood working analogy falls down. The scale at which damage can occur due to the decision to use power tools over hand tools is, for most practical purposes, limited to just myself. With computers, we can share our fuck ups with the whole world.

Kostchei 12 minutes ago | parent | next [-]

so what you are saying is that for production we should use AI, and hand code for hobby, got it. Lemme log back into the vpn and set the agents on the Enterprise monorepo /jk

unsungNovelty 10 hours ago | parent | prev [-]

Nicely put. The woodworking analogy does work.

ngc248 2 hours ago | parent | prev [-]

Exactly. I don't think people understand anymore why programming languages even came about. Lots of people don't understand why a natural language is not suitable for programming, and by extension for prompting an LLM.

giancarlostoro 17 hours ago | parent | prev | next [-]

I have done strict back-end, strict front-end, full stack, QA automation, and some devops as well. I worked in an all-Linux shop where great senior devs encouraged us to always strive for better software all around. I think you're right: it mostly depends on your mindset and how much you expose yourself to the craft. I can sometimes tackle obscure front-end things better than back-end issues, despite hating front-end but knowing enough to be dangerous. (My first job in tech really had me doing everything imaginable.)

I find LLMs boost my productivity because I've always had a sort of architectural mindset. I love looking up projects that solve specific problems and keeping them in the back of my mind. It turns out I was building myself up for instructing LLMs on how to build me software: they take what would be several months' worth of effort and spit it out in a few hours.

Speaking of vibe coding in archaic languages: I'm using LLMs to understand old Shockwave Lingo and translate it, so I can rebuild a legacy game in a modern language. Maybe once I spin up my blog again I'll start documenting that fun journey.
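
As a purely illustrative taste of what that translation involves (the Lingo below is a generic Director-style handler, not code from the game, and the TypeScript names are hypothetical):

  // A classic Lingo mouse handler, shown as a comment:
  //
  //   on mouseUp me
  //     set the locH of sprite 5 to the locH of sprite 5 + 10
  //   end
  //
  // ...and a hypothetical TypeScript rendering of the same behavior.
  const sprite = { locH: 0 }; // stand-in for Director's sprite channel 5

  function onMouseUp(): void {
    sprite.locH += 10; // nudge the sprite 10 pixels right, like the handler
  }

  onMouseUp();
  console.log(sprite.locH); // 10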

badRNG 15 hours ago | parent | next [-]

> Speaking of vibe coding in archaic languages

Well, I think we can say C is archaic when most developers write in something that, one, isn't C, two, isn't a language itself written in C, and three, isn't running on something written in C :)

kgeist 8 hours ago | parent | next [-]

If we take the most popular programming languages and look at what their reference (or most popular) implementations are written in, then we get:

  C++: JavaScript (V8), Java, C#

  C: Python, PHP, Lua, Ruby

  Self-hosted: Go, Rust

Far from archaic indeed. We're still living in the C/C++ world.

pjmlp 15 minutes ago | parent | next [-]

Java and C# compilers are selfhosted.

Then, depending on which JVM implementation we are talking about, the actual JVM runtime can be written in Java, C, or C++, or a mix of them.

Modern C compilers are written in C++.

Rust uses LLVM, written in C++.

tmtvl 8 hours ago | parent | prev [-]

I thought Rust still used LLVM (a C++ project) for the backend; did they already switch to Cranelift?

pjmlp 14 minutes ago | parent [-]

No, it is still LLVM.

psunavy03 14 hours ago | parent | prev [-]

(Python has exited the chat)

pacifika 11 hours ago | parent | prev | next [-]

Ah, Lingo, where the programming metaphor was a theatre production!

TheRoque 3 hours ago | parent | prev | next [-]

> it takes several months worth of effort and spits it out in a few hours

lol

burnt-resistor 16 hours ago | parent | prev [-]

Hehe. In the "someone should make a website"™ department: use a crap ton of legacy protocols and plugins, semi-interoperable with modern browsers, while offering legacy browsers loaded with legacy plugins something usable to test with, i.e.:

- SSL 2.0-TLS 1.1, HTTP/0.9-HTTP/1.1, ftp, WAIS, gopher, finger, telnet, rwho, TinyFugue MUD, UUCP email, SHOUTcast streaming some public domain radio whatever

- <blink>, <marquee>, <object>, XHTML, SGML

- Java <applet>, Java Web Start

- MSJVM/J++, ActiveX, Silverlight

- Flash, Shockwave (of course), Adobe Air

- (Cosmo) VRML

- Joke ActiveX control or toolbar that turns a Win 9x/NT-XP box into a "real" ProgressBar95. ;)

(Gov't mandated PSA: Run vintage {good,bad}ness with care.)

giancarlostoro 14 hours ago | parent | next [-]

To be fair, we have Flash emulators that run in modern browsers, and a Shockwave one as well, though it seems to be losing a bit of traction. Man, VRML brought me back. Don't forget VBScript!

lawlessone 15 hours ago | parent | prev [-]

why even write webpages or apps anymore? just prompt an LLM every time a user makes a request and write the page to send to the user :D

giancarlostoro 14 hours ago | parent [-]

This... was a Show HN a little while back, can't tell if you're making a joke or referring to that.

lawlessone 14 hours ago | parent [-]

oh god, it was a joke, but i want to see that. i hope they made it as a joke.

edit: I think i found it https://news.ycombinator.com/item?id=45783640

pron 14 hours ago | parent | prev | next [-]

The thing is that an imagined AI that can reliably produce reliable software will likely also be smart enough to come up with the requirements on its own. If vibe coding is that capable, then even vibe coding itself is redundant. In other words, vibe coding cannot possibly be "the future", because the moment vibe coding can do all that, vibe coding doesn't need to exist.

The converse is that if vibe coding is the future, that means we assume there are things the AI cannot do well (such as come up with requirements), at which point it's also likely it cannot actually vibe code that well.

The general problem is that once we start talking about imagined AI capabilities, both the capabilities and the constraints become arbitrary. If we imagine an AI that does X but not Y, we could just as easily imagine an AI that does both X and Y.

anon84873628 8 hours ago | parent | next [-]

My bet is that it will be good enough to devise the requirements.

They already can brainstorm new features and make roadmaps. If you give them more context about the business strategy/goals then they will make better guesses. If you give them more details about the user personas / feedback / etc they will prioritize better.

We're still just working our way up the ladder of systematizing that context, building better abstractions, workflows, etc.

If you were to start a new company with an AI assistant and feed it every piece of information (which it structures, summarizes, synthesizes, etc. in a systematic way), then even with finite context it's going to be damn good. I mean, just imagine a system that can continuously read and structure all the data from regular news, market reports, competitor press releases, public user forums, sales call transcripts, etc. It's the dream of "big data".

goatlover 2 hours ago | parent [-]

If it gets to that point, why is the customer even talking to a software company? Just have the AI build whatever. And if an AI assistant can synthesize every piece of business information, why is there a need for a new company? The end user can just ask it to do whatever.

whimsicalism 13 hours ago | parent | prev | next [-]

I agree with the first part, which is basically that "being able to do a software engineer's full job" is ASI/AGI-complete.

But I think it is certainly possible that we reach a point/plateau where everything is just "english -> code" compilation, but that "vibe coding" compilation step is really, really good.

pron 11 hours ago | parent | next [-]

It's possible, but I don't see any reason to assume that it's more likely that machines will be able to code as well as working programmers yet not be able to come up with requirements or even ideas as well as working PMs. In fact, why not the opposite? I think that currently LLMs are better at writing general prose, offering advice, etc., than they are at writing code. They are better at knowing what people generally want than they are at solving complex logic puzzles that require many deduction steps. Once we're reduced to imagining what AI can and cannot do, we can imagine pretty much any capability or restriction we like. We can imagine something is possible, and we can just as well choose to imagine it's not possible. We're now in the realm of, literally, science fiction.

whimsicalism 10 hours ago | parent [-]

> It's possible, but I don't see any reason to assume that it's more likely that machines will be able to code as well as working programmers yet not be able to come up with requirements or even ideas as well as working PMs.

Ideation at the working-PM level, sure. I meant more hard technical ideation, i.e. what gets us from 'not working humanoid robot' to 'humanoid robot', or 'what do we need to do to get a detection of a Higgs boson', etc. I think it is possible to imagine a world where 'english -> code' (for reasonably specific English) is solved but that level of ideation is not. If that level of ideation is solved, then we have ASI.

agentultra 7 hours ago | parent [-]

There are a ton of extremely Hard problems there that we are not likely to solve.

One: English is terribly non-prescriptive. Explaining an algorithm in spoken language is incredibly laborious and can contain many ambiguous errors. Try reading Euclid's Elements, or really any pre-algebra text, and reproducing its results.

Fortunately there’s a solution to that. Formal languages.

Now LLMs can somewhat bridge that gap, due to how frequently we write about code. But it's a non-deterministic process, and hallucinations are by design. There's no escaping the fact that an LLM is making up the code it generates. There's nothing inside the machine that understands what any of the data it's manipulating means or how it affects the system it's generating code for.

And it’s not even a tool.

Worse, we can’t actually ship the code that gets generated without a human appendage to the machine to take the fall for it if there are any mistakes in it.

If you're trying to vibe code an operating system and have no idea what good OS design is or what good code for such a system looks like… you're going to be a bad appendage for the clanker. If it could ship code on its own, the corporate powers that be would absolutely fire all the vibe coders and you'd never work again.

Vibe coding is turning people into indentured corporate servants. The last mile delivery driver of code. Every input surveilled and scrutinized. Output is your responsibility and something you have little control over. You learn nothing when the LLM gives you the answer because you’ll forget it tomorrow. There’s no joy in it either because there is no challenge and no difficulty.

I think what pron is getting at is that there's no need to imagine what these machines could potentially do. We should be looking at what they actually do, who they're doing it to, and who benefits from it.

jimbokun 6 hours ago | parent | prev [-]

The only reason to imagine that plateau is because it’s painful to imagine a near future where humans have zero economic value.

goatlover 2 hours ago | parent | next [-]

It's not the only reason; technologies do plateau. We're not living in orbiting cities or flying fusion-powered vehicles around, even though we built rockets and nuclear power more than half a century ago.

tjr 6 hours ago | parent | prev [-]

Why is this desirable?

jimbokun 6 hours ago | parent [-]

It’s not, it’s horrifying.

But there doesn’t seem to be any off ramp, given the incentives of our current economic system.

keybored 12 hours ago | parent | prev [-]

This is the most coherent comment in this thread. People who believe in vibe coding but not in generalizing it to "engineering"... brother, the LLMs speak English. They can even hold conversations with your uncle.

mavamaarten 17 hours ago | parent | prev | next [-]

Yup. I would never be able to give my Jira tickets to an LLM because they're too damn vague or incomplete. Getting the requirements first needs 4 rounds of lobbying with all stakeholders.

mrweasel 15 hours ago | parent | next [-]

We had a client who'd create incredibly detailed Jira tickets. Their lead developer (also their only developer) would write exactly how he'd want us to implement a given feature, and what the expected output would be.

The guy is also a complete tool. I'd point out that what he described wasn't actually what they needed, and that their functionality was ... strange and didn't actually do anything useful. We'd be told to just do as we where being told, seeing as they where the ones paying the bills. Sometimes we'd read between the lines, and just deliver what was actually needed, then we'd be told just do as we where told next time, and they'd then use the code we wrote anyway. At some point we got tired of the complaining and just did exactly as the tasks described, complete with tests that showed that everything worked as specified. Then we where told that our deliveries didn't work, because that wasn't what they'd asked for, but couldn't tell us where we misunderstood the Jira task. Plus the tests showed that the code functioned as specified.

Even if the Jira tasks are in a state where it seems like you could feed them directly to an LLM, there's no context (or incorrect context) and how is a chatbot to know that the author of the task is a moron?

SchemaLoad 13 hours ago | parent | next [-]

Every time I've received overly detailed JIRA tickets like this it's always been significantly more of a headache than the vague ones from product people. You end up with someone with enough tech knowledge to have an opinion, but separated enough from the work that their opinions don't quite work.

jordwest 11 hours ago | parent [-]

Same, I think there's an idealistic belief in people who write those tickets that something can be perfectly specified upfront.

Maybe for the most mundane, repetitive tasks that's true.

But I'd argue that the code is the full specification, so if you're going to fully specify it you might as well just write the code, and then you'll actually have to confront your mistaken assumptions.

sandblast an hour ago | parent | prev | next [-]

Maybe you'll appreciate having it pointed out to you: you should work on your usage of "where" vs "were".

zephen 13 hours ago | parent | prev | next [-]

> how is a chatbot to know that the author of the task is a moron?

Does it matter?

The chatbot could deliver exactly what was asked for (even if it wasn't what was needed) without any angst or interpersonal issues.

Don't get me wrong. I feel you. I've been there, done that.

OTOH, maybe we should leave the morons to their shiny new toys and let them get on with specifying enough rope to hang themselves from the tallest available structure.

rixed 7 hours ago | parent | prev | next [-]

Are you working at OpenAI?

mrweasel an hour ago | parent [-]

No, but now I'm curious about the inner workings of OpenAI.

ForOldHack 14 hours ago | parent | prev [-]

"The guy is also a complete tool." - Who says Hackers news is not filled with humor?

threethirtytwo 9 hours ago | parent | prev | next [-]

Who says an LLM can't be taught or given a system prompt that enables it to do this?

Agentic AI can now do 20 rounds of lobbying with all stakeholders, as long as it's over something like Slack.

colechristensen 16 hours ago | parent | prev | next [-]

A significant part of my LLM workflow involves having the LLM write and update tickets for me.

It can make a vague ticket precise and that can be an easy platform to have discussions with stakeholders.

somebehemoth 16 hours ago | parent | next [-]

I like this use of LLM because I assume both the developer and ticket owner will review the text and agree to its contents. The LLM could help ensure the ticket is thorough and its meaning is understood by all parties. One downside is verbosity, but the humans in the loop can edit mercilessly. Without human review, these tickets would have all the downsides of vibe coding.

Thank you for sharing this workflow. I have low tolerance for LLM written text, but this seems like a really good use case.

SoftTalker 15 hours ago | parent | prev | next [-]

Wait until you learn that the people on the other side of your ticket updates are also using LLMs to respond. It's LLMs talking to LLMs now.

antisthenes 13 hours ago | parent | next [-]

Wait until you learn that most people's writing skills are that of below LLMs, so it's an actual tangible improvement (as long as you review the output for details not being missed, of course)

gerdesj 9 hours ago | parent [-]

Hoisted by your own petard ("me old fruit"):

"Wait until you learn that most people's writing skills are that of below LLMs"

... went askew at "that of below LLMs".

I'm an arse: soz!

colechristensen 15 hours ago | parent | prev [-]

The desired result is coming to a documented agreement on an interaction, not some exercise in argument that has to happen between humans.

I find having an LLM create tickets for itself to implement to be an effective tool, one I rarely have to provide feedback on at all.

This seems like greybeards complaining that people don't write assembly by hand.

Yeask 14 hours ago | parent [-]

Who has ever complained that kids don't write assembly by hand?

Stop being outraged by things that are only real in your mind.

colechristensen 13 hours ago | parent [-]

Speaking of things that are only real in your mind...

Am I outraged?

And yes, there absolutely was a vocal group of a certain type of programmer complaining about high-level languages like C, their risks and inefficiency and lack of control, insisting that real programmers wrote code in assembly. It's hard to find references because Google sucks these days and I'm not really willing to put in the effort.

Yeask 13 hours ago | parent [-]

You made it up, that is why you can't find it.

remexre 12 hours ago | parent | next [-]

How's [0] or [1] for historical sources?

It's not surprising that Google doesn't turn these up; the golden era of this complaining was pre-WWW.

[0]: https://www.ee.torontomu.ca/~elf/hack/realmen.html [1]: https://melsloop.com/

Yeask 12 hours ago | parent [-]

Have you not noticed that the story you reference is so well known because... literally every single developer thinks people like Mel are crazy?

Mel or Terry Adams are the exception to the rule... That image of greybeards only comes if you have never worked with one in real life; sorry, you are biased.

llbbdd 12 hours ago | parent | prev [-]

https://xkcd.com/378/

PaulHoule 14 hours ago | parent | prev [-]

A significant part of my workflow is getting a ticket that is ill-defined or confused and rewriting it so that it is something I can do or not do.

From time to time I have talked over a ticket with an LLM, gotten back what I think is a useful analysis of the problem, and put it into the text or comments, and I find my peeps tend to think these are TL;DR.

colechristensen 6 hours ago | parent [-]

Yeah, most people won't read things. At the beginning of my career I wrote emails that nobody read and then they'd be upset about not knowing this or that which I had already explained. Such is life, I stopped writing emails.

An LLM will be just as verbose as you ask it to be. The default response can be very chatty, but you can figure out how to ask it to give results in various lengths.

bjacobso 17 hours ago | parent | prev [-]

Claude Code et al. ask clarifying questions in plan mode before implementing. This will eventually extend to Jira comments.

swatcoder 17 hours ago | parent | next [-]

You think the business line stakeholder is going to patiently hang out in JIRA, engaging with an overly cheerful robot that keeps "missing the point" and being "intentionally obtuse" with its "irrelevant questions"?

This is how most non-technical stakeholders feel when you probe for consistent, thorough requirements and a key professional skill for many more senior developers and consultants is in mastering the soft skills that keep them attentive and sufficiently helpful. Those skills are not generic sycophancy, but involve personal attunement to the stakeholder, patience (exercising and engendering), and cycling the right balance between persistence and de-escalation.

Or do you just mean there will be some PM who acts as a proxy for the stakeholder on the ticket, but still needs to get them onto the phone and into meetings so the answers can be secured?

Because in the real world, the former is outlandish and the latter doesn't gain much.

a_wild_dandan 16 hours ago | parent [-]

Businesses do whatever’s cheap. AI labs will continue making their models smarter, more persuasive. Maybe the SWE profession will thrive/transform/get massacred. We don’t know.

fooker 17 hours ago | parent | prev [-]

What do you mean by eventually?

This already exists.

jcelerier 17 hours ago | parent | prev | next [-]

To be honest, I've never worked in an environment that seemed too complex. On my side, my primary blocker is writing code. I have an unending list of features, protocols, experiments, etc. to implement, and so far the main limit has been the time necessary to actually write the damn code.

swatcoder 17 hours ago | parent | next [-]

That sounds like papier-mâché more than bridge building: forever pasting more code on as ideas and time permit, without the foresight to engineer or architect toward some cohesive long-term vision.

Most software products built that way seem to move fast at first but become monstrous abominations over time. If those are the only places you keep finding yourself in, be careful!

ebiester 15 hours ago | parent | next [-]

There are any number of small problems for which we do not need bridges.

As a stupid example, I hate the functionality YouTube offers for maintaining playlists. However, I don't have the time to build something by hand. It turns out that the general case is hard, but the "for me" case is vibe-codable. (Yes, I could code it myself. No, I'm not going to spend the time to do so.)

Or, using the Jira API to extract the statistics I need instead of spending a Thursday night away from the family or pushing out other work.

Or, any number of tools that are within my capabilities but not within my time budget. And there's more potential software that fits this bill than software that needs to be bridge-stable.
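
For scale, the Jira-stats idea above is roughly this much code. A rough sketch, assuming Jira's standard REST search endpoint; the instance URL, JQL, and credentials are placeholders:

  // Pull recent issues from Jira's REST search API and count them by status.
  const base = "https://your-instance.atlassian.net"; // placeholder instance
  const jql = encodeURIComponent("project = ABC AND resolved >= -30d");

  const res = await fetch(`${base}/rest/api/2/search?jql=${jql}&maxResults=100`, {
    headers: { Authorization: `Basic ${btoa("me@example.com:API_TOKEN")}` },
  });
  const data = await res.json();

  // Toy statistic: resolved-issue counts per status.
  const counts: Record<string, number> = {};
  for (const issue of data.issues) {
    const status: string = issue.fields.status.name;
    counts[status] = (counts[status] ?? 0) + 1;
  }
  console.log(counts);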

swatcoder 14 hours ago | parent [-]

Absolutely.

But the person I replied to seemed to be talking about a task agenda for their professional work, not a todo list of bespoke little weekend hobby hacks that might be handy "around the house".

16 hours ago | parent | prev [-]
[deleted]
f1shy 17 hours ago | parent | prev | next [-]

I don't want to imply this is your case, because of course I've no idea how you work. But I've seen way too often that the reason for so many separate features is:

A) As the parent comment stated, the ones doing requirements management are doing a poor job of abstracting the requirements, and what could be done as one feature suddenly turns into 25.

B) In a similar manner to A, all solutions imply writing more and more code, never refactoring and abstracting parts away.

mckn1ght 17 hours ago | parent | next [-]

My guess would be that the long list is maybe not self-contained features (although it still can be; I know I have more feature ideas than I can deliver in the next couple of years myself), but behaviors or requirements of one or a handful of product feature areas.

When you start getting down into the weeds, there can be tons and tons of little details around state maintenance, accessibility, edge cases, failure modes, alternate operation modes etc.

That all combines to make lots of highly interconnected code, so you need to write even more code to test it. Sometimes much more than the target implementation's code.

16 hours ago | parent | prev [-]
[deleted]
iberator 16 hours ago | parent | prev | next [-]

Hehe. Try working for some telecoms, dealing with GSM, UMTS, LTE, and 5G.

fuzztester 16 hours ago | parent [-]

or banking. or finance. or manufacturing. or $other_enterprise_lob_area.

source: been there, done some of that.

yoyohello13 10 hours ago | parent | prev [-]

Man I wish this was my job. I savor the days when I actually don’t have to do requirements gathering and can just code.

freetonik 3 hours ago | parent | prev | next [-]

>In my work, the bigger bottleneck to productivity is that very few people can correctly articulate requirements.

Agreed.

In addition, on the other side of the pipeline, code reviews are another bottleneck. We could have more MRs in review thanks to AI, but we can't really move at the speed of LLM outputs unless we blindly trust them (or trust another AI to do the reviews, at which point what are we doing here at all...).

ozim 13 hours ago | parent | prev | next [-]

Unfortunately a lot of it is also because of illiteracy.

Lots of people hide the fact that they struggle with reading and a lot of people hide or try to hide the fact they don’t understand something.

antirez 4 hours ago | parent | prev | next [-]

This means your difficulty is not programming per se, but that you are working in a very suboptimal industry / company / system. With all due respect, you use programming at work, but true programming is the act of creating a system that you or your team designed and want to bring to life. Confusing the reality of writing code for a living at some company with Programming with a capital P produces a lot of misunderstanding.

al_borland 14 hours ago | parent | prev | next [-]

I don’t mind the coding, it’s the requirements gathering and status meetings I want AI to automate away. Those are the parts I don’t like and where we’d see the biggest productivity gains. They are also the hardest to solve for, because so much of it is subjective. It also often involves decisions from leadership which can come with a lot of personal bias and occasionally some ego.

Vegenoid 10 hours ago | parent [-]

This is like the reverse centaur form of coding. The machine tells you what to make, and the human types the code to do the thing.

al_borland 10 hours ago | parent [-]

Well, when put like that it sounds pretty bad too.

I was thinking more that the human would tell the machine what to make. The machine would help flesh out the idea into actual requirements, and make any decisions the humans are too afraid or indecisive to make. Then the coding can start.

doug_durham 16 hours ago | parent | prev | next [-]

I don't think the author would disagree with you. As you point out, coding is just one part of software development. I understand his point to be that the coding portion of the job is going to be very different going forward. A skilled developer is still going to need to understand frameworks and tradeoffs so that they can turn requirements into a potential solution. It's just that they might not be coding up the implementation.

ljm 13 hours ago | parent | prev | next [-]

I constantly run into issues where features are planned and broken down outside-in, and it always makes perfect sense if you consider it in terms of the pure user interface and behaviour. It completely breaks down when you consider that the API, or the backend, is a cross-cutting concern across many of those tidy-looking tasks and cannot map to them 1:1 without creating an absolute mess.

Trying to insert myself, or the right backend people, into the process is more challenging now than it used to be, and a bad API can make or break the user experience as the UI gets tangled in the web of spaghetti.

It hobbles the effectiveness of whatever you could get an LLM to do, because you're already starting on the back foot, requirements-wise.

MetaWhirledPeas 11 hours ago | parent | prev | next [-]

> very few people can correctly articulate requirements

This is the new programming. Programming and requirements are both a form of semantics. One conveys meaning to a computer at a lower level, the other conveys it to a human at a higher level. Well now we need to convey it at a higher level to an LLM so it can take care of the lower-level translation.

I wonder if the LLM will eventually skip the programming part and just start moving bits around in response to requirements?

immibis an hour ago | parent | next [-]

We have a machine that turns requirements into code. It's called a compiler. What happened to programming after the invention of the compiler?

lisbbb 8 hours ago | parent | prev [-]

My solution as a consultant was to build some artifact that we could use as a starting point. Otherwise, you're sitting around spinning your wheels and billing big $ while the pressure mounts. Building something at least lets you demonstrate you are working on their behalf, with the promise that it will be refined or completely changed as needed. It's very hard when you don't get people who can send down requirements, but that was like 100% of the places I worked. I very seldom ran into people who could articulate what they needed until I stepped up, showed them something they could sort of stand on, and went from there.

The Mythical Man-Month had it all: build one to throw away.

tshaddox 16 hours ago | parent | prev | next [-]

I like my requirements articulated so clearly and unambiguously that an extremely dumb electronic logic machine can follow every aspect of the requirements and implement them "perfectly" (limited only by the physical reliability of the machine).

deepsun 15 hours ago | parent [-]

Aka "coding". I see what you mean ;)

wouldbecouldbe 12 hours ago | parent | prev | next [-]

The solo projects I do are 10x; the team projects maybe 2-3x in productivity. I think in big companies it's much, much less.

The highest gains are definitely in full-stack frameworks (like Next.js) with a database ORM, building large features in one go without having to go back and forth with stakeholders or colleagues.

keeda 14 hours ago | parent | prev | next [-]

This feels like it addresses a point TFA did not make. TFA talks mostly about vibe coding speeding up coding, whereas your comment is about software development as a whole. As you point out, coding is just one aspect of engineering, and we must be clear about which "productivity" we are talking about.

Sure, there are the overhypers who talk about software engineers getting entirely replaced, but I get the sense those are not people who've ever done software development in their lives. And I have not seen any credible person claiming that engineering as a whole can be done by AI.

On the other hand, the most grounded comments about AI-assisted programming everywhere are about the code, and maybe some architecture and design aspects. I personally, along with many other commenters here and actual large-scale studies, have found that AI does significantly boost coding productivity.

So yes, actual software engineering is much more than coding. But note that even if coding is, say, only 25% of engineering (there are actually studies about this), putting a significant dent in that is still a huge boost to overall productivity.

sureglymop 14 hours ago | parent | prev | next [-]

There's also the fact that requirements engineering in general isn't being done correctly.

I'm the last guy to be enthused about any "ritualistic"-seeming businessy processes. Just let me code...

However, some things do need actual, well-defined, adhered-to processes where all parties are aware of and agree with the protocol.

jama211 7 hours ago | parent | prev | next [-]

This is like saying the typewriter won’t make a newspaper company more productive because the biggest bottlenecks are the research and review processes rather than the typing. It’s absolutely true, but it was still worth it to go up to typewriters, and the fact that people were spending less effort and time on the handwriting part helps all aspects of energy levels etc across their job.

legitster 17 hours ago | parent | prev | next [-]

Convince your PMs to use an LLM to help "breadboard" their requirements. It's a really good use case. They can ask the dumb questions they're afraid to ask, and an LLM will do a decent job of parsing their ideas, asking questions, and putting together a halfway decent set of requirements.

gitremote 17 hours ago | parent | next [-]

PMs wouldn't be able to ask the right questions. They have zero experience with developer experience (DevEx); they only have experience with user experience (UX).

tmp10423288442 16 hours ago | parent [-]

You can hope that an LLM might have some instructions related to DevEx in its prompt, at least. There's no way to completely fix stupid, any more than you can convince a naive vibecoder that just vibing a new Linux-compatible kernel written entirely in Zig is a feasible project.

Scarblac 16 hours ago | parent | prev [-]

How does the LLM get all the required knowledge about the domain and the product to ask relevant questions?

sh4rks 10 hours ago | parent [-]

Give it access to the codebase and a text file with all relevant business knowledge.

wiml 8 hours ago | parent [-]

Man ... if there were a text file with "all relevant business knowledge" in any job I've ever worked, it would have been revolutionary.

I'd say 25% of my work-hours are just going around to stakeholders and getting them to say what some of their unstated assumptions and requirements are.

burnte 14 hours ago | parent | prev | next [-]

"the bigger bottleneck to productivity is that very few people can correctly articulate requirements."

I've found the same thing. I just published an AI AUP (acceptable use policy) for my company, and most of it is teaching folks HOW to use AI.

sputr 2 hours ago | parent | prev | next [-]

Yeah, the hardest part is understanding the requirements. But it then still takes hours and hours and hours to actually build the damn thing.

Except that now it still takes me the same time to understand the requirements ... and then the coding takes 1/2 or 1/3 of the time. The coding also takes about 1/3 of the effort, so I leave my job less burned out.

Context: web app development agency.

I really don't understand this "if it doesn't replace me 100% it's not making me more productive" mentality. Yeah, it's not a perfect replacement for a senior developer ... but it is like putting the senior developer on a bike and pretending that it's not making them go any faster because they are still using their legs.

threethirtytwo 15 hours ago | parent | prev | next [-]

You can vibe ask the requirements. Not even kidding.

shortrounddev2 17 hours ago | parent | prev | next [-]

I write a library which is used by customers to implement integrations with our platform. The #1 thing I think about is not

> How do I express this code in Typescript?

it's

> What is the best way to express this idea in a way that won't confuse or anger our users? Where in the library should I put this new idea? Upstream of X? Downstream of Y? How do I make it flexible so they can choose how to integrate it? Or maybe I don't want to make it flexible; maybe I want to force them to use this new format?

> Plus making sure that whatever changes I make are non-breaking, which means that if I update some function with new parameters, they need to be made optional. So now I need to remember, downstream, that this particular argument may or may not be `undefined`, because I don't want to break implementations from customers who just upgraded to the most recent minor or patch version.

The majority of the problems I solve are philosophical, not linguistic
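
To make the non-breaking-change point concrete, here is a minimal TypeScript sketch (hypothetical names, not my library's actual API):

  // A new option added in a minor release must be optional so existing
  // customer integrations keep compiling unchanged.
  interface SendOptions {
    retries?: number;         // pre-existing option
    idempotencyKey?: string;  // new: must stay optional to be non-breaking
  }

  export function sendEvent(name: string, opts?: SendOptions): string {
    // Older callers never pass idempotencyKey, so downstream code has to
    // treat it as possibly undefined; default it once here.
    const key = opts?.idempotencyKey ?? `auto-${Date.now()}`;
    const retries = opts?.retries ?? 3;
    return `${name}:${key}:retries=${retries}`;
  }

  // A pre-upgrade call site still type-checks:
  sendEvent("user.created");

Old call sites keep compiling; the cost is that every downstream consumer now has to handle the `undefined` case.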

epolanski 16 hours ago | parent | prev | next [-]

If AI doesn't make you more productive, you're using it wrong, end of story.

Even if you don't let it author a single line of code, from collecting information, inspecting code, reviewing requirements, reviewing PRs, finding bugs, hell, even researching information online, there are so many things it does well and fast that if you're not leveraging it, you're either in denial or have AI skill issues, period.

geraneum 16 hours ago | parent | next [-]

Not to refute your point but I’ve met overly confident people with “AI skills” who are “extremely productive” with it, while producing garbage without knowing, or not being able to tell the difference.

epolanski 16 hours ago | parent | next [-]

You're describing a lack of care and a lack of professionalism. Fire these people. It has nothing to do with the tools; the problem is the person using them.

geraneum 15 hours ago | parent | next [-]

Yeah, I'm talking about people, and that's honestly what matters here. At the end of the day these tools are used by people, and how people use them plays a big role in how we assess their usefulness.

mrwrong 13 hours ago | parent | prev | next [-]

this is known as the no true scotsman fallacy

ModernMech 15 hours ago | parent | prev [-]

We're trying very earnestly to create a world where being careful and professional is a liability. "Move fast and break things, don't ask permission, don't apologize for anything" is the dominant business model. Having care and practicing professionalism take time and patience, which just translate to missed opportunities to make money.

Meanwhile, if you grift hard enough, you can become CEO of a trillion-dollar company or President of the United States. Young people are being raised today seeing that you can raise billions on the promise of building self-driving cars in 3 years, not deliver even after 10 years, and nothing bad actually happens. Your business doesn't crater, you don't get sued into oblivion, your reputation doesn't really change. In fact, the bigger the grift, the more people are incentivized to prop it up. Care and professionalism are dead until we go back to an environment that is not so nurturing for grifts.

impulsivepuppet 14 hours ago | parent [-]

While I circumstantially agree, I hold it to be self-evident that the "optimal amount of grift is nonzero". I leave it to politicians to decide whether increased oversight, decentralization, or "solution X" is the right call to make.

ModernMech 12 hours ago | parent [-]

A little grift is expected. The real problem for us is when it's grift all the way down, and all the way up, to the extent even the President is grifting. Leaving it to the politicians in that case just means enabling maximum, economy-scale grift.

tick_tock_tick 10 hours ago | parent | prev | next [-]

I've not really seen this outside of extremely junior engineers. On the flip side, I've seen plenty of seniors who can't figure out how to interact with AI tools and come away thinking they're useless, when just watching them for a bit makes it clear the issue is the engineer.

SchemaLoad 13 hours ago | parent | prev | next [-]

They just shovel the garbage on someone else who has to fact check and clean it up.

MangoCoffee 14 hours ago | parent | prev | next [-]

you can say that about overly confident people with "xyz" skills.

9rx 15 hours ago | parent | prev [-]

Garbage to whom? Are we talking about something that the user shudders to think about, or something more like a product the user loves, but behind the scenes the worst code ever created?

geraneum 15 hours ago | parent [-]

A lot of important details/parts of a system (not only code) that may seem insignificant to the end user can be really important in making the system work correctly as a whole.

mdavidn 16 hours ago | parent | prev | next [-]

It sounds like you're the one in denial? AI makes some things faster, like working in a language I don't know very well. It makes other things slower, like working in a language I already know very well. In both cases, writing code is a small percentage of the total development effort.

epolanski 16 hours ago | parent [-]

No, I'm not. I'm just sick of these edgy takes where AI does not improve productivity, when it obviously does.

Even if you limit your AI experience to finding information online through deep research, it's such a time saver and productivity booster that it makes a lot of difference.

The list of things it can do for you is massive, even if you don't have it write a single line of code.

Yet the counterargument is like "bu..but.. my colleague is pushing slop and it's not good at writing code for me". Come on, then use it for the things it's good at, not the things you don't find satisfactory.

lunar_mycroft 15 hours ago | parent | next [-]

It "obviously" does based on what, exactly? For most devs (and it appears you, based on your comments) the answer is "their own subjective impressions", but that METR study (https://arxiv.org/pdf/2507.09089) should have completely killed any illusions that that is a reliable metric (note: this argument works regardless of how much LLMs have improved since the study period, because it's about how accurate dev's impressions are, not how good the LLMs actually were).

keeda 13 hours ago | parent | next [-]

Yes, self-reported productivity is unreliable, but there have been other, larger, more rigorous, empirical studies on real-world tasks which we should be talking about instead. The majority of them consistently show a productivity boost. A thread that mentions and briefly discusses some of those:

https://news.ycombinator.com/item?id=45379452

lunar_mycroft 13 hours ago | parent [-]

Some (partial) counter points:

- I think that, given publicly available metrics, it's clear this isn't translating into more products/apps getting shipped. That could be because devs are now running into other bottlenecks, but it could also indicate that there's something wrong with these studies.

- Most devs who say AI speeds them up assert numbers much higher than what those studies have shown. Much of the hype around these tools is built on those higher estimates.

- I won't claim to have read every study, but of the ones I have checked in the past, the more the methodology impressed me the less effect it showed.

- Prior to LLMs, it was near universally accepted wisdom that you couldn't really measure developer productivity directly.

- Review is imperfect, and LLMs produce worse code on average than human developers. That should result in somewhat lowered code quality with LLM usage (although that might be an acceptable trade-off for some). The fact that some of these studies didn't find that is another thing suggesting there are shortcomings in said studies.

keeda 12 hours ago | parent [-]

> - Most devs who say AI speeds them up assert numbers much higher than what those studies have shown.

I am not sure how much is just programmers saying "10x" because that is the meme, but if at all realistic numbers are mentioned, I see people claiming 20 - 50%, which lines up with the studies above. E.g. https://news.ycombinator.com/item?id=45800710 and https://news.ycombinator.com/item?id=46197037

> - Prior to LLMs, it was near universally accepted wisdom that you couldn't really measure developer productivity directly.

Absolutely, and all the largest studies I've looked at mention this clearly and explain how they try to address it.

> Review is imperfect, and LLMs produce worse code on average than human developers.

Wait, I'm not sure that can be asserted at all. It's anecdotally not my experience, and the largest study in the link above explicitly discusses this and finds that proxies for quality (like approval rates) indicate more improvement than decline. The Stanford video accounts for code churn (possibly due to fixing AI-created mistakes) and still finds a clear productivity boost.

My current hypothesis, based on the DORA and DX 2025 reports, is that quality is largely a function of your quality control processes (tests, CI/CD etc.)

That said, I would be very interested in studies you found interesting. I'm always looking for more empirical evidence!

johnsmith1840 14 hours ago | parent | prev | next [-]

It's a good study. I also believe this is not an easy skill to learn. I would not say I have 10x output, but easily 20%.

When I was early in my use of it I would have said I sped up 4x, but now, after using it heavily for a long time, some days it's +20%, other days -20%.

With this technology, it's very difficult to know which of the two you're getting.

The real thing to note is that when you "feel" lazy while using AI, you are almost certainly in the -20% category. I've had days of not thinking where I had to revert all the code from that day because AI jacked it up so much.

To get that speedup you need to be truly focused 100%, or risk death by a thousand cuts.

hu3 15 hours ago | parent | prev [-]

not OP but I have a hard metric for you.

AI multiplied the amount of code I committed last month by 5x and it's exactly the code I would have written manually. Because I review every line.

model: Claude Sonnet 3.5/4.5 in VSCode GitHub Copilot. (GPT Codex and Gemini are good too)

lunar_mycroft 14 hours ago | parent [-]

I have no reason to think you're lying about the first part (although I'd point out there are several ways that metric could be misleading, and approximately every piece of evidence available suggests it doesn't generalize), but the second part is very fishy. There's really no way for you to know whether you'd have written the same code, or effectively the same code, after reviewing existing code, especially when that review must be fairly cursory (because to get the speedup you claim, you must be spending much less time reviewing the code than it would have taken to write it). Effectively, what you've done is move the subjectivity from "how much does this speed me up?" to "is the output the same as if I had done it manually?"

hu3 14 hours ago | parent [-]

> There's really no way for you to know whether or not you'd have written the same code or effectively the same code after reviewing existing code.

There is in my case because it's just CRUD code. The pattern looks exactly like the code I wrote the month prior.

And this is where LLMs excel at, in my experience. "Given these examples, extrapolate to these other cases."

douglasisshiny 11 hours ago | parent | prev [-]

>No I'm not, I'm just sick of these edgy takes where AI does not improve productivity when it obviously does.

Feel free to cite said data you've seen supporting this argument.

gitremote 16 hours ago | parent | prev [-]

My company mandates AI usage and logs AI usage metrics as input to performance evaluation, so I use it every day. It's a Copilot subscription, though.

cujo 16 hours ago | parent [-]

why though? are they just using it as a proxy for "is 'gitremote' working today?"

porksoda 14 hours ago | parent | next [-]

The first time I asked it about some code in a busy monorepo and it said "oh, Bob asked me to do this last week when he was doing X; it works like Y and you can integrate it with your stuff like Z, would you like to update the spec now?"... I had some happy feelings. I don't know how they do it without clobbering the context, but it's great.

epolanski 16 hours ago | parent | prev | next [-]

Someone in management needs a promotion from his charlatan managers for his impact in revolutionizing and streamlining development.

15 hours ago | parent | prev | next [-]
[deleted]
Eldt 8 hours ago | parent | prev [-]

This is probably where they're getting their "90% of code is written with AI!" metrics from.

asimeqi 8 hours ago | parent | prev | next [-]

AI is making coding so cheap that you can now program a few versions of the API and choose the one that works better.

alfalfasprout 9 hours ago | parent | prev | next [-]

This is one reason I think spec driven development is never really going to work the way people claim it should. It's MUCH harder to write a truly correct, comprehensive, and useful spec than the code in many cases.

luckydata 14 hours ago | parent | prev | next [-]

Sounds like you work with inexperienced PMs who are not doing their job. Did you try having a serious conversation about this pattern with them? I'm pretty sure some communication would go a long way toward getting you into a better collaboration groove.

gitremote 14 hours ago | parent [-]

I've been doing API development for over ten years and have worked at different companies. Most PMs are not technical, and it's the development team's job to figure out the technical specifications for the APIs we build. If you press the PMs, they will ask the engineering/development manager for the written technical requirements, and if the manager is not technical, they will assign it to the developers/engineers. Technical requirements for an API are really a system design question.

yieldcrv 15 hours ago | parent | prev [-]

and in reality, all the separate roles should be deprecated

we vibe requirements into our ticket tracker with an API key, vibe code the ticket effort, and manage the state of the tickets via our commits, pull requests, and deployments

just teach the guy the product manager is shielding you from not to micromanage, and all the friction is gone

in this same year I've worked at an organization that didn't allow AI use at all, and by Q2, Copilot was somehow solving their data security concerns (gigglesnort)

in a different organization none of those restrictions exist, and the productivity boost is an order of magnitude greater