rybosome a day ago
Ok - not wrong at all. Now take that feedback and put it in a prompt back to the LLM. They’re very good at honing bad code into good code with good feedback. And when you can describe good code faster than you can write it - for instance when it uses a library you’re not intimately familiar with - this kind of coding can be enormously productive.
imiric a day ago
> They’re very good at honing bad code into good code with good feedback.

And they're very bad at keeping other code good across iterations. So you might find that while they might've fixed the specific thing you asked for—in the best case scenario, assuming no hallucinations and such—they inadvertently broke something else. So this quickly becomes a game of whack-a-mole, at which point it's safer, quicker, and easier to fix it yourself.

IME the chance of this happening is directly proportional to the length of the context.
aunty_helen a day ago
Nah. This isn’t true. Every time you hit enter you’re not just getting a jr dev, you’re getting a randomly selected jr dev.

So, how did I end up with a logging.py, config.py, config in __init__.py, and main.py? Well, I prompted for it to fix the logging setup to use a specific format.

I use Cursor; it can spit out code at an amazing rate and has reduced the amount of docs I need to read to get something done. But after its second attempt at something, you need to jump in, do it yourself, and most likely debug what was written.
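For reference, here is a minimal sketch of what "fix the logging setup to use a specific format" typically amounts to with Python's stdlib logging (the module name and format string are illustrative, not taken from the thread):

    # logging_setup.py -- hypothetical module name, for illustration only
    import logging

    def setup_logging(level: int = logging.INFO) -> None:
        # Configure the root logger with an explicit format string.
        logging.basicConfig(
            level=level,
            format="%(asctime)s %(levelname)s %(name)s: %(message)s",
            datefmt="%Y-%m-%dT%H:%M:%S",
        )

    if __name__ == "__main__":
        setup_logging()
        logging.getLogger(__name__).info("logging configured")

That is the kind of change that fits in one place, rather than being spread across four new files.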
necovek a day ago
I do plan on experimenting with the latest versions of coding assistants, but last I tried them (6 months ago), none could satisfy all of the requirements at the same time.

Perhaps there is simply too much crappy Python code around that they were trained on, as Python is frequently used for "scripting". Perhaps the field has moved on and I need to try again.

But looking at this, it would still be faster for me to type this out myself than go through multiple rounds of reviews and prompts. Really, a senior has not reviewed this, no matter what language they work in (race conditions throughout, not just this file).
barrell 19 hours ago
I would not say it is “very good” at that. Maybe it’s “capable,” but my (ample) experience has been the opposite. I have found that the more exactly I describe a solution, the less likely it is to succeed. And the more of a solution it has come up with, the less likely it is to change its mind about things.

Ever since the ~4o models, there seems to be a pretty decent chance that you ask it to change something specific, it says it will, and it spits out line-for-line identical code to what you just asked it to change.

I have had some really cool success with AI finding optimizations in my code, but only when specifically asked, and even then I just read the response as theory and go write it myself, often in 1-15% of the LoC the LLM used.
BikiniPrince a day ago
I’ve found AI tools extremely helpful in getting me up to speed with a library, or in finding an internal override not exposed by the help docs. However, if I’m not explicit about how to solve a problem, the result looks like the bad code it’s been ingesting.