jaggederest 2 days ago

The key is prompting. Prompt to within an inch of your life. Treat prompts as source code - edit them in files, use @ notation to bring them into the console. Use Claude to generate its own prompts - https://github.com/wshobson/commands/ and https://github.com/wshobson/agents/ are very handy, they include a prompt-engineer persona.
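Concretely, a reusable prompt kept as a versioned file looks something like this (the file name and contents are illustrative; .claude/commands/ and @ file references are, as I understand it, how Claude Code exposes prompts-as-files):

    # .claude/commands/cleanup-pass.md
    Review the diff on the current branch for duplication, dead code,
    and naming drift. Propose a refactor plan before changing anything,
    then apply it in small, reviewable commits.

Then in the console you can invoke it as a slash command, or pull it into context with @.claude/commands/cleanup-pass.md.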

I'm at the point now where I have to yell at the AI once in a while, but I touch essentially zero code manually, and it's acceptable quality. At one point I stopped and tried to fully refactor a commit that CC had created, but I could only make marginal improvements in return for an enormous time commitment. If I had spent that time improving my prompts and running refactoring/cleanup passes in CC, I suspect I would have come out ahead. So I'm deliberately trying not to do that.

I expect at some point on a Friday (last Friday was close) I will get frustrated and go build things manually. But for now it's a reduction in cognitive load and effort for similar quality. It helps to use the most standard libraries and languages possible, and great tests are a must.

Edit: Also, use the "thinking" commands. think / think hard / think harder / ultrathink are your best friends when attempting complicated changes (of course, if you're attempting complicated changes, don't).
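An illustrative invocation (the task here is made up; the keyword just gets prepended to the prompt):

    ultrathink: plan the migration of our session store from signed
    cookies to Redis. List the affected files and the order of changes
    before you touch any code.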

thayne 2 days ago | parent | next [-]

This works fairly well for well-defined, repetitive tasks. But at least for me, if I have to put that much effort into the prompt, it's often easier to just write the code myself.

masto 2 days ago | parent | next [-]

Sometimes I spend half an hour writing a prompt and realize that I’ve basically rubber-ducked the problem to the point where I know exactly what I want, so I just write the code myself.

I have been doing my best to give these tools a fair shake, because I want to have an informed opinion (and certainly some fear of being left behind). I find that their utility in a given area is inversely proportional to my skill level. I have rewritten or fixed most of the backend business logic that AI spits out. Even if it’s mostly ok on a first pass, I’ve been doing this gig for decades now and I am pretty good at spotting future technical debt.

On the other hand, I’m consistently impressed by its ability to save me time with UI code. Or maybe it’s not that it saves me time, but it gets me to do more ambitious things. I’d typically just throw stuff on the page with the excuse that I’m not a designer, and hope that eventually I can bring in someone else to make it look better. Now I can tell the robot I want to have drag and drop here and autocomplete there, and a share to flooberflop button, and it’ll do enough of the implementation that even if I have to fix it up, I’m not as intimidated to start.

theshrike79 a day ago | parent [-]

I've had the Corporate Approved Copilot + Sonnet 4 write a full working React page for me based on a screenshot of a Figma design (not even through an MCP).

It even discovered that we have some internal components and used them.

Got me from zero to MVP in less than an hour. It would've easily taken me a full day otherwise.

NitpickLawyer 2 days ago | parent | prev | next [-]

I've found it works really well for exploration as well. I'll give it a new library and ask it to explore the library with "x goal" in mind. It then goes and agents away for a few minutes, and I get a mini proof of concept that more often than not does what I wanted and can also give me options.
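Something like this, roughly (the library name and goal are invented for illustration):

    Explore the fastcsv library with the goal of streaming parsing of
    multi-gigabyte CSV files. Build a minimal proof of concept under
    examples/, and note any alternative APIs worth considering.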

xenobeb 2 days ago | parent | prev [-]

I am certain it has much to do with whether the problem is in the training data or not.

I have loved GPT-5, but the other day I was trying to implement a rather novel idea, just a small function, and GPT-5 went from a genius to an idiot.

I think HN has devolved into people talking past each other, depending on what fraction of their problems happens to be in the training data. People really are having very different experiences with the models based on the novelty of the problems being solved.

At this point it is getting boring to read.

rco8786 2 days ago | parent | prev | next [-]

Have you made any attempt to quantify your efficiency/output versus writing the code yourself? I've done all of the things you've mentioned, with varying degrees of success. But everything you're talking about doing is time-consuming and eats away at whatever efficiency gain CC claims to offer.

jaggederest a day ago | parent [-]

Days instead of weeks, basically. Hard to truly quantify, but I'm bloody-minded enough to reimplement things three times to check, and even with foresight the AI is faster.

shaunxcode 2 days ago | parent | prev | next [-]

I am convinced that this comment, once read aloud in the cadence of Ginsberg, is a work of art!

jaggederest 2 days ago | parent [-]

Now I'm trying to find a text-to-Ginsberg translator. Maybe he's who I sound like in my head.

fragmede 2 days ago | parent | prev [-]

How much voice control have you implemented?

jaggederest a day ago | parent [-]

None, but it's on the list! I'm actually using it to prototype a complete audio/visual tracking and annotation tool, so feeding it back into itself is a logical next step.