ahepp 8 hours ago
It seems like this would be a really interesting field to research. Does AI-assisted coding result in fewer bugs, or more bugs, vs an unassisted human? I've been thinking about this as I do AoC with Copilot enabled. It's been nice for those "hmm, how do I do that in $LANGUAGE again?" moments, but it's also written some nice-looking snippets that don't quite do what I want. And there have been many cases of "hmmm... that would work, but it would read the entire file twice for no reason". My guess, however, is that it's a net gain for quality and productivity. Humans write bugs too, and there need to be processes in place to discover and remediate those regardless.
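To make the "reads the file twice for no reason" point concrete, here's a hypothetical sketch (not from the comment) of the kind of thing a completion might suggest, next to the single-pass version you'd actually want:

```python
# Hypothetical example of the "works, but reads the file twice" pattern.
def count_lines_and_words_twice(path):
    # Two passes: each open() re-reads the whole file from disk.
    with open(path) as f:
        lines = len(f.readlines())
    with open(path) as f:
        words = sum(len(line.split()) for line in f)
    return lines, words

def count_lines_and_words_once(path):
    # Single pass: accumulate both counts line by line.
    lines = words = 0
    with open(path) as f:
        for line in f:
            lines += 1
            words += len(line.split())
    return lines, words
```

Both return the same answer; the first just does twice the I/O, which is exactly the sort of correct-but-wasteful code that's easy to wave through in review.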
sunshowers 7 hours ago
I'm not sure about research, but I've used LLMs for a few things here at Oxide with (what I hope is) appropriate judgment. I'm currently trying out Opus 4.5 to take care of a gnarly code reorganization that would take a human most of a week -- I spent a day writing a spec (by hand, with some editing advice from Claude Code), having it reviewed as a document for humans by humans, and feeding it into Opus 4.5 on some test cases. It seems to work well. The spec is, of course, in the form of an RFD, which I hope to make public soon. I like to think of the spec as basically an extremely advanced sed script described in ~1000 English words.