fcpguru a day ago

These are the questions I ask engineers now:

Q1. How well do you prompt?

Q2. How quickly do you react and cancel a running prompt as soon as you see it take a wrong turn?

Q3. How well do you author PRs without ever saying "well AI did that part." Do you own your PRs?

Re: 1. It's a new skill set. Some engineers are not good at it. Some are amazing. And some are out of this world. You can spend a few minutes crafting a prompt that's thorough and well thought out, yet succinct and to the point: one that touches on all the unknowns the AI might hit as it works through the task and gives it clear direction. An out-of-this-world prompt gets you 90% to 95% of the way through a task that, before AI, would take weeks.

Re: 2. You can do very well with the first prompt, but the other half of this is watching the AI work: knowing when to watch very closely and when it's okay to multi-task, and knowing when to press ESC immediately and tell the AI exactly what you just saw that you didn't like. Catching it quickly is like early detection of cancer. You can save the rest of your day by preventing messy dead ends before they even happen.

Re: 3. Not everyone on the team will be using AI at the level the Jedi are. And your code, whether 90% AI-generated or 0%, is still your responsibility. Once you make your PR and ask for feedback, you should no longer be able to say "oh, I didn't write this part." Yes you did. No one on the team needs to know what % of your PR was AI, and you don't need to know theirs.

artisin a day ago | parent | next [-]

Regarding point 1 - when you say "a few minutes," I'm wondering if we're talking about the same thing? I spent two solid months with Claude Max, before they imposed limits, running multiple Opus agents, and never once got anywhere close to "weeks of work" from a single prompt. Not even in the same zip code.

So I'm genuinely asking: could you pretty please share one of these prompts? Just one. I'd honestly prefer discovering I can't prompt my way out of a paper bag rather than facing the reality that I wasted a bucketload of time and money chasing such claims.

fcpguru a day ago | parent [-]

Let's work backwards. Send me a task you agree is weeks of work, and then I'll show you a prompt.

yvely a day ago | parent [-]

So basically your source is "trust me bro, I'll prove it to you"? I think it is a relevant question. We can't go around calling people genius prompt engineers and then skip the engineering part: writing down what actually works and how it can be replicated. Could we try to work backwards not from the problems of the person asking the question, but from your claims (i.e. from wherever you got them)?

fcpguru 19 hours ago | parent [-]

Not at all. I'm happy to provide the prompts. Let's just agree first on which 2+ week coding problem we are trying to solve. If I just pick one and show you the prompt I used, you'll say "no way would that have taken over 2 weeks."

AstroBen a day ago | parent | prev | next [-]

A few minutes crafting a prompt to save weeks

That's like a 2,000x productivity boost (a couple of minutes against the roughly 4,800 working minutes in two weeks). AI is amazing

sanitycheck a day ago | parent | prev [-]

Not sure I agree with 3.

I can have Claude write stuff that does basically the right thing and actually works, but, as so many people say, it's like working with a junior dev. The only way I'm going to get code I'd want to present as my own is by sitting looking over their shoulder and telling them what to do constantly, and with both a junior dev and Claude (etc.) that takes longer than just writing it myself.

AI-coded/assisted work should be flagged as such so everyone knows that it needs extra scrutiny. If we decided to let a horde of junior devs loose on a codebase we'd naturally pay more attention to PRs from new intern Jimmy than revered greybeard Bob and we shouldn't let Claude hide behind Bob's username.

fcpguru 19 hours ago | parent [-]

Sounds like a bad 1 and then bad execution of 2. If you do 1 and 2 well, you get a PR for 3 that is NOT like working with a junior dev.

sanitycheck 18 hours ago | parent [-]

1 is easy enough for trivial tasks, but in a complex (typically horrible) production codebase nearly all the work is investigation and debugging. However good the initial prompt is, the context soon becomes flooded with log output and code, and the LLM goes off the rails quite quickly.

Doing 2 well is the AI babysitting mentioned in the article. Of course you can stop it every minute and tell it to do something else, then watch it like a hawk to make sure it does it right, then clear the context when it ignores you and makes the exact mistake you told it not to make. But that is often slower than just doing the work yourself to begin with, which probably leads to the findings we've all seen that LLM use is actually reducing productivity.

I think living with crappy AI code is the price we currently have to pay for getting development done quicker. Maybe in a year it'll have improved enough that we can ask it to clean up all the mess it made.

(Possibly I just have higher standards than most, other humans can be quite bad too.)

fcpguru 18 hours ago | parent [-]

"all the work is investigation and debugging" - Yes! Exactly you can ask the AI a bunch of questions first and really dig into what the codebase currently does. Then spend the time crafting that prompt that explains how to surgically do what needs to be done. If you are watching it every min like a hawk you are doing it wrong. You need to watch it more like a VERY smart junior dev and trust but verify. I'm not saying it's easy to get good at these new skill sets. But simply throwing your hands up and saying "I'll just walk everywhere vs using a bicycle" isn't a strategy that's going to work well.