bigstrat2003 | 4 days ago
I don't understand people who say this. My knee-jerk reaction (which I rein in because it's incredibly rude) is always "wow, that person must really suck at programming then", and I try to hold to the conviction that there's another explanation. For me, the vast, vast majority of the time I try to use it, AI slows my work down rather than speeding it up. As a result, it's incredibly difficult to understand where these supposed 10x improvements are being seen.
loandbehold | 4 days ago
For me, most of the value comes from Claude Code's ability to 1. research the codebase and answer questions about it, and 2. perform ad hoc testing on the code. Actually writing code is icing on the cake. I work on a large codebase with more than two million lines of code. Claude Code's ability to find relevant code and understand its purpose, history, and interfaces saves a lot of time. It can answer in minutes questions that would take hours of digging through the codebase. Ad hoc testing is the other thing: I can just ask it to test an API endpoint, and it will find the right data to use in the database, call the endpoint, and verify that it returned the correct data and that everything was updated in the database correctly.
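For concreteness, here is a minimal sketch of the kind of ad hoc check being described, assuming a hypothetical /api/orders/{id}/cancel endpoint and a local Postgres test database; the endpoint, table, and column names are illustrative and not from the original comment.

    # Hypothetical ad hoc check: hit an endpoint, then confirm the row
    # it should have touched actually changed in the database.
    import requests
    import psycopg2

    ORDER_ID = 42  # an id assumed to already exist in the test database

    # Call the (hypothetical) endpoint.
    resp = requests.post(
        f"http://localhost:8000/api/orders/{ORDER_ID}/cancel",
        timeout=10,
    )
    resp.raise_for_status()
    assert resp.json().get("status") == "cancelled"

    # Verify the database was updated as expected.
    conn = psycopg2.connect("dbname=app_test user=app")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT status FROM orders WHERE id = %s", (ORDER_ID,))
        (status,) = cur.fetchone()
        assert status == "cancelled", f"unexpected status: {status}"
    print("endpoint and database agree")

The value described isn't this throwaway script itself, it's that the tool can assemble something like it on request, including picking suitable test data, without being handed the schema.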
libraryofbabel | 4 days ago
Usually the "10x" improvements come from greenfield projects, or at least smaller codebases. Productivity improvements on mature, complex codebases are much more modest, more like 1.2x.

If you genuinely want, in good faith, to understand where people are coming from when they talk about huge productivity gains, I would recommend installing Claude Code (specifically that tool) and asking it to build some kind of small project from scratch. (The one I tried was a small app to poll a public flight API for planes near my house and plot their positions, along with other metadata. I didn't give it the API schema at all, and it was still able to make it work.) This will show you, at least, what these tools are capable of -- and not just on toy apps, but also at small startups doing a lot of greenfield work very quickly.

Most of us aren't doing that kind of work; we work on large, mature codebases. AI is much less effective there because it doesn't have all the context we have about the codebase and the product. Sometimes it's useful, sometimes not. But before weighing that tradeoff, I do think it's worth setting aside skepticism, seeing it at its best, and giving yourself that "wow" moment.
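For a sense of scale, here is a minimal sketch of the core of such an app, assuming the OpenSky Network public API (the comment doesn't say which flight API was used) and a hard-coded bounding box around a hypothetical home location; it just prints nearby aircraft rather than plotting them.

    # Rough sketch: poll a public flight API (OpenSky Network assumed here)
    # for aircraft inside a bounding box and print what comes back.
    import requests

    # Hypothetical bounding box around "home" (lat/lon in degrees).
    BBOX = {"lamin": 47.3, "lomin": 8.4, "lamax": 47.5, "lomax": 8.7}

    resp = requests.get(
        "https://opensky-network.org/api/states/all",
        params=BBOX,
        timeout=15,
    )
    resp.raise_for_status()

    for state in resp.json().get("states") or []:
        # Per the OpenSky state vector layout: index 1 is callsign,
        # 5/6 are longitude/latitude, 7 is barometric altitude (metres).
        callsign = (state[1] or "").strip() or "<no callsign>"
        lon, lat, alt = state[5], state[6], state[7]
        print(f"{callsign:10s} lat={lat} lon={lon} alt={alt}")

What the tool produced from a one-line prompt was considerably more complete than this (plotting and extra metadata), which is what makes the greenfield case feel so different from the brownfield one.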
bentcorner | 4 days ago
It depends on what kind of code you're working on and what tools you're using. There's a sliding scale: the more you're working in a well-known language with common coding patterns, and with tooling that makes it easy to leverage AI, the better it can predict what you're going to type, and the "bigger" the problems you can throw at it and expect it to solve. Personally, I've found that it struggles if you're using a language that is off the beaten path. The more content on the public internet the model could have consumed, the better it will be.