Bridged7756 13 hours ago

I'm really dubious of such claims. Even if true, I think they miss the whole picture. Sure, I could churn out code 10x as fast, but I still have to review it. I still have to think through the implementation. I still have to think of the test cases and write them. Now add the prerequisites for LLMs: I have to word the task in a way the AI can understand, which is extra mental load. I sometimes have to review code multiple times when it gets something wrong, then re-generate, make corrections, or end up rewriting entire sections it produced once I decide it just won't get the task right.

Overall, while time is saved on typing and (sometimes) researching dependency docs, I face the same cognitive load as ever, if not more, between the extra code to review and the prompts to think through. I'm still limited by the same thing at the end of the day: my mental energy. I can write the code myself, and it's only a bit slower, if at all. I still need to know my dependencies, and I still need to know my codebase and all its quirks, even when the AI generates code correctly.

Overall, the net complexity of my codebase stays the same, and I don't buy the crap, also because I've never heard stories about reducing complexity (refactoring), only about generating code and patching up codebases with tests and comments/docs (bad practice imo; the shallow docs generated are unlikely to say anything the code doesn't already make evident). Anyway, I'm not a believer. I only use LLMs for scaffolding and rote tasks.