thayne 2 days ago
This works fairly well for well-defined, repetitive tasks. But at least for me, if you have to put that much effort into the prompt, it's likely easier to just write the code myself.
masto 2 days ago
Sometimes I spend half an hour writing a prompt and realize that I've basically rubber-ducked the problem to the point where I know exactly what I want, so I just write the code myself.

I have been doing my best to give these tools a fair shake, because I want to have an informed opinion (and certainly some fear of being left behind). I find that their utility in a given area is inversely proportional to my skill level. I have rewritten or fixed most of the backend business logic that AI spits out. Even if it's mostly OK on a first pass, I've been doing this gig for decades now and I am pretty good at spotting future technical debt.

On the other hand, I'm consistently impressed by its ability to save me time with UI code. Or maybe it's not that it saves me time, but that it gets me to do more ambitious things. I'd typically just throw stuff on the page with the excuse that I'm not a designer, and hope that eventually I can bring in someone else to make it look better. Now I can tell the robot I want drag and drop here and autocomplete there, and a share-to-flooberflop button, and it'll do enough of the implementation that, even if I have to fix it up, I'm not as intimidated to start.
| ||||||||
NitpickLawyer 2 days ago
I've found it works really well for exploration too. I'll give it a new library and ask it to explore the library with goal X in mind. It then goes off and agents away for a few minutes, and I get back a mini proof of concept that more often than not does what I wanted, and that can also show me my options.
xenobeb 2 days ago
I am certain it has a lot to do with whether the problem is in the training data or not. I have loved GPT5, but the other day I was trying to implement a fairly novel idea, amounting to a rather small function, and GPT5 went from a genius to an idiot. I think HN has devolved into random conversations driven by what random percentage of each person's problems happens to be in the training data. People really are having very different experiences with the models depending on the novelty of the problems they're solving. At this point it is getting boring to read.