▲ | kaishiro 6 days ago |
Jesus, that's a staggering figure to me coming from senior developers. I guess I'm the odd one out here, but ChatGPT is nothing more than an index of Stack Overflow (and friends) for me. It's essentially replaced Googling, but once I get the answer I need I'm still just slinging code like an asshole. Copying the output wholesale from any of these LLMs just seems crazy to me. | ||||||||
▲ | adw 5 days ago |
If you’re using ChatGPT directly for work, then I believe you are doing it so profoundly wrong, at this point, that you’re going to draw really incorrect conclusions. As we have all observed, the models get things wrong, and the errors compound: if each edit is wrong 5% of the time, then ten edits in you’re at roughly 60-40 (0.95^10 ≈ 0.60). So you need to run them in a loop where they’re constantly sanity-checking themselves: linting, styling, type-checking, and testing. In other words: calling tools in a loop. Agents are so much better than any other approach it’s comical, precisely because they’re scaffolding that lets models self-correct.

This is likely somewhat domain-specific; I can’t imagine the models are that great in domains they haven’t seen much code from, so they probably suck at HFT infrastructure, for example, though they are decent at reading docs by this point.

There’s also a lot of skill in setting up the right documentation, testing structure, interfaces, etc. to make the agents more reliable and productive (fringe benefit: your LLM-wielding colleagues actually write docs now, even if they’re full of em-dashes and emoji).

You also need to be willing to let it write a bunch of code, look at it, work out why it’s structurally deficient, throw it away, and build the structure you want to guide it, but the typing is essentially free, so that’s tractable. Don’t view it as bad code; view it as a useful null result.

But if you’re not using Claude Code or Codex or Roo or relatives, you’re living in an entirely different world from the people who have gone messianic about these things.
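The shape of that loop is simple. Here is a minimal sketch, with assumptions flagged: Python, run_model is a hypothetical placeholder for whatever model client you actually call, and ruff/mypy/pytest stand in for whichever linter, type checker, and test runner you use.

    import subprocess

    MAX_TRIES = 10

    def run_model(prompt: str) -> str:
        # Hypothetical placeholder: swap in a real client (OpenAI, Anthropic, etc.).
        raise NotImplementedError

    def run_checks(path: str) -> tuple[bool, str]:
        # The sanity-check tools: linter, type checker, tests. Any equivalents work.
        ok, logs = True, []
        for cmd in (["ruff", "check", path], ["mypy", path], ["pytest", "-q"]):
            result = subprocess.run(cmd, capture_output=True, text=True)
            ok = ok and result.returncode == 0
            logs.append(result.stdout + result.stderr)
        return ok, "\n".join(logs)

    def agent_loop(task: str, path: str) -> None:
        prompt = task
        for _ in range(MAX_TRIES):
            with open(path, "w") as f:
                f.write(run_model(prompt))
            ok, feedback = run_checks(path)
            if ok:
                return  # checks pass: the model corrected itself
            # Feed the tool output back so the next attempt can fix its own failures.
            prompt = f"{task}\n\nYour last attempt failed these checks:\n{feedback}"
        raise RuntimeError("no passing version within the retry budget")

The whole point is the feedback edge: tool output goes back into the prompt, so a 5%-per-edit error rate stops compounding unchecked.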