OptionOfT 4 hours ago

Except... no one validates the generated tickets, and it's full of inaccuracies.

And then someone copy pastes it into Claude and now those inaccuracies become part of the code and tests.

satvikpendem 4 hours ago | parent | next [-]

The PMs validate it, why do you think they don't read over it to make sure it fits what they want? You might say "well, they're lazy; that's why they didn't write enough detail to start off with," but for lots of people, reviewing something to make sure it's close to what they want and then tweaking it is much easier than writing it from scratch.

It's the equivalent of writer's block, and is why common advice given to writers is to put anything they can onto the page and then edit it later.

majormajor 3 hours ago | parent | next [-]

> The PMs validate it, why do you think they don't read over it to make sure it fits what they want?

The PM has historically often not had a detailed enough mental model of the implementation to spot the hard parts in advance or a detailed enough mental model of the customer desires to know if it's gonna be the right thing or not.

Those are the things that killed waterfall.

You can use LLM tools to help you improve both those areas, synthesizing large amounts of text and looking for inconsistencies.

But the 80th-percentile-or-lower person who already wasn't working hard to get ahead of those things still isn't going to work any harder than the next person, and so won't gain much of a real edge.

zxornand 3 hours ago | parent | prev | next [-]

I think validating a fully generated novel of a ticket is much harder than thinking through the problem in the first place and creating your own ticket.

We see it with code too, right? It's harder to review code than to write it.

On top of that, the LLM can work so fast that the amount of things that need validating grows!

This is where humans get lazy and the problems come in, IMO. Whether it's a PM not validating their ticket, or a dev doing a bad code review.

Add to that that the current incentives are to move fast and trust the AI.

It becomes clear to me that a lot of that review work either won't be done at all, or won't be nearly thorough enough.

mrbombastic 2 hours ago | parent | prev | next [-]

Just this week I pushed back on some requirements in a very detailed product spec I was implementing, in order to speed up time to ship. The PM had no idea what I was talking about, because the requirements were invented by an LLM. This is not a bad PM; discipline doesn't scale.

BugsJustFindMe 4 hours ago | parent | prev [-]

> The PMs validate it, why do you think they don't read over it to make sure it fits what they want?

Hahahahahaha. Sorry, I couldn't help myself; this reads like satire. The answer is "real life experience says otherwise".

resters 4 hours ago | parent | prev [-]

This failure is human laziness, not an issue with the technology. People who use AI because they are trying to avoid doing work fall into a completely different category than people who use AI as a force multiplier, and as a way to enhance their skills and improve quality.

OptionOfT 3 hours ago | parent | next [-]

It's also the only way to get those massive increases in productivity.

iv4122 2 hours ago | parent | prev | next [-]

I second this.

danaris 3 hours ago | parent | prev [-]

This is very much a "you're holding it wrong" response.

If your technology relies on humans using it in ways that go against how they are inclined to use it, then that is an issue with the technology.