▲ MORPHOICES | 2 days ago
After using an AI coding assistant regularly for a while, I noticed something unusual: my biggest wins came not from better prompts or a smarter model, but from how I operated. At first I treated it like autocomplete. Later, like a junior developer. Eventually, like a collaborator that needs constraints.

Here is the framework I've landed on:

Step 1: Ask for everything. You get speed, but a lot of noise.

Step 2: Add rules. Fewer surprises, more trust.

Step 3: Let it act, but review aggressively.

A few habits that made a big difference:

- Specifying exactly which files it can touch.
- Asking it to explain diffs before applying them.
- Treating "wrong but confident" answers as a signal to tighten scope.

Curious what others have seen over time. What changed after week two or week four? When did your trust go up or down? What rules do you wish you had added earlier?
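The "specify what can be touched" habit can be enforced mechanically rather than by eyeballing. A minimal sketch, assuming you have the assistant's proposed change as a unified diff; the allowlist prefixes and file paths here are invented for illustration, not from any particular tool:

```python
# Hypothetical guard: reject a unified diff that touches files outside
# an allowlist. Prefixes and paths below are made up for illustration.

ALLOWED_PREFIXES = ("src/billing/", "tests/billing/")

def touched_files(diff_text: str) -> list[str]:
    """Extract file paths from the '+++ b/...' headers of a unified diff."""
    files = []
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            files.append(line[len("+++ b/"):])
    return files

def diff_in_scope(diff_text: str) -> bool:
    """True only if the diff touches at least one file and every
    touched file sits under an allowed prefix."""
    files = touched_files(diff_text)
    return bool(files) and all(f.startswith(ALLOWED_PREFIXES) for f in files)

diff = """\
--- a/src/billing/invoice.py
+++ b/src/billing/invoice.py
@@ -1 +1 @@
-old
+new
--- a/src/auth/login.py
+++ b/src/auth/login.py
@@ -1 +1 @@
-old
+new
"""

# The second file is outside the allowlist, so this diff is rejected.
print(diff_in_scope(diff))
```

Running a check like this before applying a change turns "I asked it to stay in the billing module" from a hope into a gate.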