ticoombs | 3 days ago
I used to joke about prompt engineering. But by jiminy it is a thing now. I swear sometimes I spend a good 10-20 minutes writing up a good prompt and initial plan just so that Claude Code can systematically implement something. My usage is nearly the same as OP's: plan, plan, plan, save it as a file, then start a new context and let it rip.

That's the one thing I'd love: a good CLI (currently using Charm and CC) that lets me have an implementation model, a plan model, and (possibly) a model per sub-agent. Mainly so I can save money by using local models for implementation and online models for planning or generation, or even swap back and forth. Charm has been the closest I've used so far, letting me swap back and forth without losing context. But the parallel sub-agent feature is probably one of the best things Claude Code has. A rough sketch of the kind of routing I mean is below.

(Yes, I'm aware of CCR, but I could never get it to use more than the default model, so :shrug:)
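Roughly the kind of thing I'm imagining, as a hypothetical sketch (the role names and model identifiers are made up for illustration, not any real CLI's config):

  # Hypothetical per-role model routing -- not any real tool's config.
  # The idea: planning goes to a hosted model, implementation and
  # sub-agents go to cheaper local models, with the plan saved to a
  # file so nothing is lost between context switches.

  from dataclasses import dataclass

  @dataclass
  class ModelRoute:
      provider: str   # "local" or "hosted" (assumed: any OpenAI-compatible endpoint)
      model: str      # model identifier, invented for this example

  ROUTES = {
      "plan":      ModelRoute(provider="hosted", model="big-planner-model"),
      "implement": ModelRoute(provider="local",  model="local-coder-model"),
      "sub_agent": ModelRoute(provider="local",  model="small-local-model"),
  }

  def pick_model(role: str) -> ModelRoute:
      """Return the route for a role, falling back to the implementation model."""
      return ROUTES.get(role, ROUTES["implement"])

  # Workflow from the comment: plan with the hosted model, write the plan
  # to a file, then start a fresh context and let the local model implement.
  plan_route = pick_model("plan")       # hosted, pricier, good at planning
  impl_route = pick_model("implement")  # local, cheap, executes the plan file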
NitpickLawyer | 3 days ago
> I used to joke about prompt engineering. But by jiminy it is a thing now.

This is the downside of living in a world of tweets, hot takes, and content generated for the sake of views. Prompt engineering was always important, because GIGO has always been a ground truth in any ML project.

This is also why I encourage all my colleagues and friends to try these tools out from time to time. New capabilities become apparent only when you try them. What didn't work six months ago has a very good chance of working today. But you need a "feel" for what works and what doesn't.

I also place much more value on examples, blogs, and gists that show a positive instead of a negative. Yes, they can't count the r's in strawberry, but I don't need that! I don't care if the models get simple arithmetic wrong. I need them to follow tasks, improve workflows, and help me.

Prompt engineering was always about getting the "google-fu" of 10-15 years ago rolling, and then keeping up with what's changed, what works, and what doesn't.
BiteCode_dev | 3 days ago
Projects using AI are the best-documented and best-tested projects I've worked on.

They are well documented because the LLM needs context to perform well. And they are well tested because the cost of producing tests has dropped, since they can be half-generated, while the benefit of having tests has risen, since they act as guardrails for the machine. A hypothetical example of what I mean by a guardrail test is below.

People constantly say code quality is going to plummet because of these tools, but I think the exact opposite is going to happen.
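As a concrete illustration (the function and test cases are invented, not from any particular project), this is the kind of test that is cheap to half-generate and then review, and that later catches regressions when the model edits the code:

  # Hypothetical half-generated "guardrail" test.
  # Cheap to draft with an LLM, quick to review by hand, and it flags
  # regressions when the model later rewrites normalize_username.

  def normalize_username(name: str) -> str:
      """Lowercase, trim, and collapse inner whitespace into single dots."""
      return ".".join(name.strip().lower().split())

  def test_normalize_username():
      assert normalize_username("  Ada Lovelace ") == "ada.lovelace"
      assert normalize_username("GRACE") == "grace"
      assert normalize_username("a  b   c") == "a.b.c"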
| ||||||||||||||||||||||||||
scastiel | 3 days ago
I agree, prompt engineering really is the foundation of working with AI (whether it's for coding or anything else).
samrus | 3 days ago
honestly "prompt engineering" is just the vessel for architecting the solution. its like saying "diagram construction" really took off as a skill. its architecting with a new medium |