▲ | breakpointalpha 18 hours ago
Your mileage may vary, but I just got Cursor (using Claude 4 Sonnet) to one-shot a sequence of bash scripts that clean up stale AWS resources. I pasted in the Jira ticket description that I'd written, along with a few examples, and the script works perfectly. Saved me a few hours of bash writing and debugging, because I can read bash but not write it well.

It seems that the smaller the task and the more tightly defined the input and output, the better LLMs are at one-shotting.
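For the curious, here's roughly the shape of one of them. This is an illustrative sketch, not the actual script (the real ones targeted several resource types with ticket-specific filters): a dry-run-by-default sweep of unattached EBS volumes.

    #!/usr/bin/env bash
    # Illustrative sketch only: delete EBS volumes not attached to any instance.
    # Dry run by default; set DRY_RUN=0 to actually delete.
    set -euo pipefail

    DRY_RUN="${DRY_RUN:-1}"

    # "available" status means the volume is not attached to anything.
    volume_ids=$(aws ec2 describe-volumes \
      --filters Name=status,Values=available \
      --query 'Volumes[].VolumeId' \
      --output text)

    for id in $volume_ids; do
      if [ "$DRY_RUN" = "1" ]; then
        echo "Would delete volume: $id"
      else
        echo "Deleting volume: $id"
        aws ec2 delete-volume --volume-id "$id"
      fi
    done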
▲ | washadjeffmad 16 hours ago
Same. I interface with a team who refuses to conduct business in anything other than Excel, and because of dated corporate mindshare, their management sees them as wizards rather than the odd ones out. "They're on top of it! They always email me the new file when they make changes and approve my access requests quickly."

There are limits to my stubbornness, and my first use of LLMs for coding assistance was to ask for help figuring out how to Excel, after a mere three decades of avoidance. After engaging and learning more about their challenges, it turned out one of their "data feeds" was actually them manually copy/pasting into a web form with a broken batch import that they'd given up on submitting project requests for, which I quietly fixed so they got to retain their turnaround while they planned some other changes.

Ultimately nothing grand, but I would never have bothered if I'd had to wade through the usual sort of learning resources available or ask another person. Being able to transfer and translate higher-level literacy, though, is right up my alley.
▲ | rebeccaskinner 17 hours ago
I’ve had similar experiences where AI saved me a ton of time when I knew what I wanted and understood the language or library well enough to review the output, but poorly enough that I’d have been slow writing it myself, because I’d have spent a lot of time looking things up.

I’ve also had experiences where things started out well but the AI got confused, hallucinated, or otherwise got stuck. At least for me those cases have turned pathological, because it always _feels_ like just one or two more tweaks to the prompt, a little cleanup, and you’ll be done, but you can end up far down that path before you realize that you need to step back and either write the thing yourself or, at the very least, be methodical enough with the AI to get it to help you debug the issue. The latter case happens maybe 20% of the time for me, but the cost is high enough that it erases most of the time savings I’ve seen in the happy-path scenario.

It’s theoretically easy to avoid by just being more thoughtful and active as a reviewer, but that reduces the efficiency gain in the happy path. More importantly, I think it’s hard to do for the same reason partially self-driving cars are dangerous: humans are bad at paying attention in “mostly safe and boring, occasionally disastrous” settings.

My guess is that in the end we’ll see fewer of the problematic cases, in part because AI improves, and in part because we’ll develop better intuition for when we’ve stepped onto the unproductive path. A lot of it, too, will be adopting ways of working that minimize the pathological “lost all day to weird LLM issues” problem by keeping humans in the loop more deeply engaged. That will necessarily also reduce the maximum size of the wins we get, but we’ll come away with a net positive gain in productivity.
▲ | jdiff 13 hours ago
That's a dangerous game to play with Bash; I'm not sure there's another language more loaded with footguns.
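The classic shape of the trap, to be concrete (hypothetical, not from the parent's scripts): an empty variable quietly turns a scoped delete into rm -rf /*.

    # Footgun sketch: TARGET_DIR comes back empty from some failed lookup...
    TARGET_DIR=""
    # ...and the "scoped" delete below expands to `rm -rf /*`.
    # (echo added so this is safe to paste)
    echo rm -rf "$TARGET_DIR/"*

    # Mitigations: fail fast on errors, and refuse to expand an empty variable.
    set -euo pipefail
    echo rm -rf "${TARGET_DIR:?refusing to run with empty TARGET_DIR}/"*

For what it's worth, shellcheck flags exactly this pattern (SC2115), which is one reason to run it over anything an LLM hands you.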