chaboud 2 hours ago

The author seems to think they've hit upon something revolutionary...

They've actually hit upon something that several of us have evolved to naturally.

LLMs are like unreliable interns with boundless energy. They make silly mistakes, wander into annoying structural traps, and have to be unwound if left to their own devices. It's like the genie that almost pathologically misinterprets your wishes.

So, how do you solve that? Exactly how an experienced lead or software manager does: you have them write the plan down before executing, explain things back to you, and ground all of their thinking in the code and documentation, avoiding assumptions about code made after only superficial review.

When it was early ChatGPT, this meant function-level thinking and clearly described jobs. When it was Cline, it meant .clinerules files that forced writing architecture.md files and vibe-code.log histories, demanding grounding in research and code reading.
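
Roughly, a rules file in that spirit might look like the sketch below (a hypothetical excerpt, not my actual .clinerules; only the architecture.md and vibe-code.log names come from the workflow above):

    # .clinerules (hypothetical excerpt)
    Before writing any code:
    1. Read the relevant source files end to end; never assume behaviour
       from file names or a superficial skim.
    2. Write the plan to architecture.md: goal, files to touch, open risks.
    3. Restate the task back in your own words and wait for confirmation.
    After every change:
    4. Append what was done, and why, to vibe-code.log.
    5. Flag any remaining assumption explicitly as an assumption.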

Maybe nine months ago, another engineer said two things to me, less than a day apart:

- "I don't understand why your clinerules file is so large. You have the LLM jumping through so many hoops and doing so much extra work. It's crazy."

- The next morning: "It's basically like a lottery. I can't get the LLM to generate what I want reliably. I just have to settle for whatever it comes up with and then try again."

These systems have to deal with minimal context, ambiguous guidance, and extreme isolation. Operate with a little empathy for the energetic interns, and they'll uncork levels of output worth fighting for. We're Software Managers now. For some of us, that's working out great.

vishnugupta an hour ago | parent | next [-]

Revolutionary or not, it was very nice of the author to take the time and effort to share their workflow.

For those starting out with Claude Code, it gives a structured way to get things done, bypassing the time/energy needed to “hit upon something that several of us have evolved to naturally”.

ffsm8 an hour ago | parent | next [-]

It's AI-written though; the tells are in pretty much every paragraph.

ratsimihah an hour ago | parent [-]

I don’t think it’s that big a red flag anymore. Most people use AI to rewrite or clean up content, so we should actually evaluate content for what it is rather than stopping at “nah, it’s AI-written.”

pmg101 15 minutes ago | parent | next [-]

I don't judge content for being AI-written; I judge it on the content itself (just like with code).

However, I do find the standard out-of-the-box style very grating. Call it faux-chummy LinkedIn corporate-workslop style.

Why don't people give the LLM a steer on style? Either base it on your personal style or at least on a writer whose style you admire. That should be easy enough.

xoac 2 minutes ago | parent [-]

Because they think this is good writing. You can’t correct what you don’t have taste for. Most software engineers think that reading books means reading NYT non-fiction bestsellers.

shevy-java 36 minutes ago | parent | prev | next [-]

Well, real humans may read it, though. Personally, I much prefer real humans writing real articles to all this AI-generated spam-slop. On YouTube this is especially annoying: they mix real videos with fake ones. I see it when I watch animal videos; some animal behaviour is taken from older videos, then AI fakery is layered on top. My policy is to never watch anything again from people who lie to their audience that way, so I've had to start filtering out such channels. I'd apply the same rationale to blog authors (though I'm not 100% certain this one is actually AI-generated; I mention it only as a safeguard).

ffsm8 19 minutes ago | parent | prev | next [-]

> I don’t think it’s that big a red flag anymore.

It is to me, because it indicates the author didn't care about the topic. The only thing they cared about was writing an "insightful" article about using LLMs. Hence this whole thing is basically LinkedIn resume-improvement slop.

Not worth interacting with, imo

Also, it's not insightful whatsoever. It's basically a retelling of other articles from around the time Claude Code was released to the public (March-August 2025).

elaus an hour ago | parent | prev [-]

I think it's very hard for us humans to separate content from its form. So when the form is the same boring, generic AI slop every time, it really doesn't help the content.

rmnclmnt 38 minutes ago | parent [-]

And maybe writing an article or keynote slides is one of the few places where we can still exercise some human creativity, especially when the core skill (programming) is already almost completely in the hands of LLMs.

petesergeant 12 minutes ago | parent | prev [-]

Here's mine! https://github.com/pjlsergeant/moarcode

marc_g 2 hours ago | parent | prev | next [-]

I’ve also found that a bigger focus on expanding my agents.md as the project rolls on has led to fewer headaches overall and more consistency (unsurprisingly). It’s the same as asking juniors to reflect on the work they’ve completed and to document important things that can help them in the future. Software Manager is a good way to put it.

zozbot234 26 minutes ago | parent [-]

AGENTS.md should mostly point to real documentation and design files that humans will also read and keep up to date. It's rare that something about a project is only of interest to AI agents.
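
As a rough illustration of that idea (all paths hypothetical), an AGENTS.md can stay thin and defer to the docs humans already maintain:

    # AGENTS.md (hypothetical sketch)
    - Architecture and module boundaries: see docs/architecture.md
    - Coding conventions and lint rules: see CONTRIBUTING.md
    - Build and test instructions: see docs/development.md

    Agent-only notes (the genuinely agent-specific remainder):
    - Run the full test suite before declaring a task done.
    - Never hand-edit generated files under src/gen/.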

jeffreygoesto 2 hours ago | parent | prev | next [-]

Oh no, maybe the V-Model was right all along? And right-sizing increments, with control stops after them. No wonder these matrix multiplications are starting to behave like humans; that is what we wanted them to do.

baxtr an hour ago | parent [-]

So basically you’re saying LLMs are helping us be better humans?

shevy-java 36 minutes ago | parent [-]

Better humans? How and where?

CodeBit26 an hour ago | parent | prev | next [-]

I really like your analogy of LLMs as 'unreliable interns'. The shift from being a 'coder' to a 'software manager' who enforces documentation and grounding is the only way to scale these tools. Without an architecture.md or similar grounding, the context drift eventually makes the AI-generated code a liability rather than an asset. It's about moving the complexity from the syntax to the specification.

BoredPositron an hour ago | parent | prev | next [-]

It's alchemy all over again.

shevy-java 35 minutes ago | parent [-]

Alchemy involved a lot of do-it-yourself, though. With AI, it's like someone else does all the work (well, almost all of it).

BoredPositron 25 minutes ago | parent [-]

It was mainly a jab at the protoscientific nature of it.
