noduerme 8 days ago
I feel I'm sort of stuck in the opposite situation from the OP. I manage a few massive codebases that I simply cannot trust an AI to go mucking around with. The only way I could get serious AI coding experience at this point would be to branch one of these and start experimenting on my own dime to see how good or bad it actually is. That doesn't really seem worth it, because I already know what I want to do with them (it's on the feature list I'm being paid to develop)... and it feels like it would take more time to talk to an LLM, get it perfectly dialed in on any given feature, and make sure the result was correct than it would take to write it myself. And I'm not getting paid for that. I feel like I'd never use Claude seriously unless someone demanded I use it from day one on a greenfield project. So while I get to keep evolving my coding skills, I'm a little worried that my "AI skills" will lag behind.
sircastor 8 days ago
I do a lot of non-work AI stuff on my own, from pair programming with AI and asking it to generate whole things to just asking it to clarify a general approach to a problem. FWIW, in a work environment (and I have not been given the go-ahead to start this at my work), I would start by supplementing my codebase: add a new feature via AI coding, or maybe rework some existing function. Start small.
slau 8 days ago
With all due respect, and I’m particularly anti-LLM, you sound exactly like someone who has never tried the tech. You can use LLMs without letting them run wild on the entire codebase. You have git, so you can see every minute change it makes. You can limit what files it’s allowed to change and how much context you give it. You don’t have to give it root on your machine to make it useful. You don’t have to “Jesus, Take the Wheel”. It is possible to try it out at a smaller scale, even on critical code. |
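To make that concrete, here is a rough sketch of what a smaller-scale experiment can look like with nothing but plain git. The branch name and commit message are made up, and the assistant step is whatever tool you happen to be trying:

    # throwaway branch so your real work is never touched
    git switch -c ai-experiment

    # ...point the assistant at one small, well-scoped file and let it edit...

    # review every change it made, hunk by hunk
    git diff
    git add -p

    # keep what holds up...
    git commit -m "AI-assisted rework of one function"

    # ...or throw the whole thing away
    git restore .
    git switch -
    git branch -D ai-experiment

Nothing in that loop requires trusting the tool with anything beyond the files you chose to point it at.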