| ▲ | A staff engineer's journey with Claude Code (sanity.io) |
| 490 points by kmelve a day ago | 333 comments |
| |
|
| ▲ | spicyusername 6 hours ago | parent | next [-] |
| I guess we're just going to be in the age of this conversation topic until everyone gets tired of talking about it. Every one of these discussions boils down to the following:
- LLMs are not good at writing code on their own unless it's extremely simple or boilerplate.
- LLMs can be good at helping you debug existing code.
- LLMs can be good at brainstorming solutions to new problems.
- Code written by LLMs always needs to be heavily monitored for correctness, style, and design, and then typically edited down, often to at least half its original size.
- LLMs' utility is high enough that they are now going to be a standard tool in the toolbox of every software engineer, but they are definitely not replacing anyone at current capability.
- New software engineers are going to suffer the most because they know how to edit the responses the least, but this was also true when they wrote their own code with Stack Overflow.
- At senior level, sometimes using LLMs is going to save you a ton of time and sometimes it's going to waste your time. Net-net, it's probably positive, but there are definitely some horrible days where you spend too long going back and forth, when you should have just tried to solve the problem yourself. |
| |
| ▲ | rafaelmn 5 hours ago | parent | next [-] | | > but this was also true when they wrote their own code with Stack Overflow. Searching for solutions and integrating the examples you find requires effort that develops into a skill. You would rarely get solutions from SO that would just fit into the codebase. If I give you a task and you produce a correct solution on the initial review, I now know I can trust you to deal with this kind of problem in the future - especially after a few reviews. If you just vibed through the problem, the LLM might have given you the correct solution, but there is no guarantee it will do so again in the future. Because you spent less effort on search/official docs/integration into the codebase, you learned less about everything surrounding it. So as a junior using LLMs you are just breaking my trust, and we both know you are not a competent reviewer of LLM code - why am I even dealing with you when I'll get LLM output faster myself? This has been my experience so far. | |
| ▲ | OvbiousError 14 minutes ago | parent | next [-] | | > So as a junior using LLMs you are just breaking my trust, and we both know you are not a competent reviewer of LLM code - why am I even dealing with you when I'll get LLM output faster myself? This has been my experience so far. So much this. I see a 1,000-line, super-complicated PR that was whipped up in less than a day, and I know they didn't read all of it, let alone understand it. | |
| ▲ | fhd2 4 hours ago | parent | prev [-] | | Like with any kind of learning, without a feedback loop (as tight as possible IMHO), it's not gonna happen. And there is always some kind of feedback loop.
Ultra-short cycle: pairing with a senior, solid manual and automated testing during development.
Reasonably short cycle: code review by a senior within hours, ideally for small subsets of the work; QA testing by a separate person within hours.
Borderline too-long cycle: code review of larger chunks of code by a senior with days of delay; QA testing by a separate person days or weeks after implementation.
Terminally long feedback cycle: critical bug in production, data loss, negative career consequences.
I'm confident that juniors will still learn, eventually. Seniors can help them learn a whole lot faster though, if both sides want that, and if the organisation lets them. And yeah, that's even more the case than in the pre-LLM world. | |
| ▲ | DenisM an hour ago | parent [-] | | LLMs can also help with learning if you ask them what could be done better. Seniors can write a pre-prompt so that company customs are taken into account. |
|
| |
| ▲ | chamomeal 5 hours ago | parent | prev | next [-] | | Yeah every time I see one of these articles posted on HN I know I'll see a bunch of comments like "well here's how I use claude code: I keep it on a tight leash and have short feedback loops, so that I'm still the driver, and have markdown files that explain the style I'm going for...". Which is fine lol but I'm tired of seeing the exact same conversations. It's exhausting to hear about AI all the time but it's fun to watch history happen. In a decade we'll look back at all these convos and remember how wild of a time it was to be a programmer. | | |
| ▲ | coldpie 4 hours ago | parent | next [-] | | I'm thiiiis close to writing a Firefox extension that auto-hides any HN headline with an LLM/AI-related keyword in the title, just so I can find something interesting on here again. | | |
| ▲ | codyb 3 hours ago | parent | next [-] | | It comes and goes in cycles... I remember the heyday of MVC frameworks and "oh my, this one is MVVC!" ad nauseam for... years lol. I stopped coming here for a year or two, now I visit once a day or so and mostly just skim a couple threads. Eventually, this entire field... just starts to feel pretty cyclical. | |
| ▲ | dysoco 2 hours ago | parent | prev | next [-] | | I appreciate HN staying simple but a tag system like lobsters has would be pretty nice... | |
| ▲ | icdtea 2 hours ago | parent | prev | next [-] | | You can do this with a custom filter list in uBlock Origin, no custom extension necessary. | |
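For example, a cosmetic filter along these lines in "My filters" (a sketch assuming HN's current markup - story rows are tr.athing with the title in .titleline - and the keyword regex is just an illustration to tune):

    news.ycombinator.com##tr.athing:has(.titleline a:has-text(/\b(AI|LLM|GPT|Claude)\b/i))

Note this only hides the title row; the points/comments row beneath it would need a similar rule.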
| ▲ | coldpie 2 hours ago | parent [-] | | I'm thinking something that would actually use HN's "Hide" feature, so other stories will populate the page after the AI ones are hidden. Is that something uBO could do? | | |
| ▲ | icdtea 30 minutes ago | parent [-] | | You make a good point, I don't believe so. I currently block most LLM/AI threads and some pages can get quite sparse. Would love to check that out if you get around to putting that together! |
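For anyone tempted, a rough userscript sketch of that idea (assumptions: HN's current markup, that each story's per-item "hide" link performs the hide on a plain GET when logged in, and an illustrative keyword regex):

    // Auto-hide HN stories whose titles match AI keywords, using HN's own
    // server-side Hide feature so fresh stories backfill on the next load.
    const KEYWORDS = /\b(AI|LLM|GPT|Claude|Gemini|Copilot)\b/i;
    for (const row of document.querySelectorAll("tr.athing")) {
      const title = row.querySelector(".titleline")?.textContent ?? "";
      if (!KEYWORDS.test(title)) continue;
      // The subtext row (points, comments, hide link) follows the title row.
      const hide = row.nextElementSibling?.querySelector('a[href^="hide?"]');
      if (hide) fetch(hide.href, { credentials: "same-origin" });
    }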
|
| |
| ▲ | ftkftk an hour ago | parent | prev [-] | | You could use an agentic AI coding tool to vibe code you one in minutes! /s | | |
| ▲ | coldpie an hour ago | parent [-] | | Honestly might give that a go, yeah. Brand new, low stakes, throwaway projects are one of the few things these tools are actually genuinely pretty useful for. | | |
| ▲ | ftkftk 20 minutes ago | parent [-] | | I agree. It may not be the most environmentally sensitive approach, but throwaway one-time tooling is a perfect use case. |
|
|
| |
| ▲ | red-iron-pine 4 hours ago | parent | prev [-] | | > Which is fine lol but I'm tired of seeing the exact same conversations. makes me think the bots are providing these conversations | | |
| |
| ▲ | automatic6131 2 hours ago | parent | prev | next [-] | | >- LLMs' utility is high enough that they are now going to be a standard tool in the toolbox of every software engineer, but they are definitely not replacing anyone at current capability. Right! Problem: billions of dollars have been poured into this in infrastructure, datacenters, compute, and salaries. LLMs need to be at the level of replacing vast swathes of us to be worth it. LLMs are not going to be doing that. This is a colossal malinvestment. | |
| ▲ | utyop22 2 hours ago | parent [-] | | Yeah, eventually reality and fantasy have to converge. Nobody knows when. But it will happen. TBH the biggest danger is that all the hopes and dreams don't materialise and the appetite for high-risk investments dissipates. We've had this period in which you can be money-losing and it's OK. But I believe we have passed the peak on that - and this is destined to blow up. |
| |
| ▲ | dawnerd 5 hours ago | parent | prev | next [-] | | On your last point, I've found it about a wash in terms of time savings for me. For boilerplate / throwaway code it's decent enough - especially if I don't care about code quality and only want a result. It has wasted so much of my time trying to make it write actual production-quality code. The inconsistency and over-verbose nature kill it for me. |
| ▲ | sunir 4 hours ago | parent | prev | next [-] | | All true if you one-shot the code. If you have a sophisticated agent system that uses multiple forward and backward passes, the quality improves tremendously. Based on my setup as of today, I'd imagine by sometime next year that will be normal, and then the conversation will be very different; mostly around cost control. I wouldn't be surprised if there is a breakout popular agent control-flow language by next year as well. The net is that unsupervised AI engineering isn't really cheaper, better, or faster than human engineering right now. Does that mean in two years it will be? Possibly. There will be a lot of optimizations in the message traffic, token use, foundational models, and also just the Moore's law of the hardware and energy costs. But really it's the sophistication of the agent systems that controls quality more than anything. Simply following waterfall (I know, right? Yuck… but it worked) increased code quality tremendously. I also gave it the SelfDocumentingCode pattern language that I wrote (on WikiWikiWeb) as a code review agent and quality improved tremendously again. | |
| ▲ | theshrike79 4 hours ago | parent [-] | | > Based on my set up as of today, I’d imagine by sometime next year that will be normal and then the conversation will be very different; mostly around cost control. I wouldn’t be surprised if there is a break out popular agent control flow language by next year as well. Currently it's just VC funded. The $20 packages they're selling are in no way cost-effective (for them). That's why I'm driving all available models like I stole them, building every tool I can think of before they start charging actual money again. By then local models will most likely be at a "good enough" level especially when combined with MCPs and tool use so I don't need to pay per token for APIs except for special cases. | | |
| ▲ | tempoponet 3 hours ago | parent [-] | | Once local models are good enough, there will be a $20 cloud provider that can give you more context, parameters, and t/s than you could dream of at home. This is true today with services like Groq. | |
| ▲ | sunir an hour ago | parent | next [-] | | Not exactly. Those pricing models are based on intermittent usage. If you're running an AI engineer with a sophisticated agent flow, the usage is constant and continuous. Over 2 years, that can price out to the equivalent of a dedicated cube at home. I had 3 projects running today. I hit my Claude Max Pro session limits twice today in about 90 minutes. I'm now keeping it down to 1 project, and I may interrupt it until the evening when I don't need Claude Web. If I could run it passively on my laptop, I would. |
| ▲ | hatefulmoron an hour ago | parent | prev [-] | | Groq and Cerebras definitely have the t/s, but their hardware is tremendously expensive, even compared to the standard data center GPUs. Worth keeping in mind if we're talking about a $20 subscription. |
|
|
| |
| ▲ | lordgrenville 3 hours ago | parent | prev | next [-] | | Yes! Reminds me of one of my all-time favourite HN comments https://news.ycombinator.com/item?id=23003595 | |
| ▲ | theshrike79 4 hours ago | parent | prev | next [-] | | > - Code written by LLMs always needs to be heavily monitored for correctness, style, and design, and then typically edited down, often to at least half its original size For this, language matters a lot: if whatever you're using has robust tools for linting and style checks, it makes the LLM's job a lot easier. Give it a rule (or a forced hook) to always run tests and linters before claiming a job is done, and it'll iterate until what it produces matches the rules. But LLM code has a habit of being very verbose and covering every situation no matter how minuscule. This is especially grating when you're doing a simple project for local use and it's bootstrapping something that's enterprise-ready :D | |
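As a sketch, a forced hook could look something like this in .claude/settings.json (assuming Claude Code's hooks schema as of this writing; "make lint test" stands in for whatever your project's lint/test command is):

    {
      "hooks": {
        "PostToolUse": [
          {
            "matcher": "Edit|Write|MultiEdit",
            "hooks": [
              { "type": "command", "command": "make lint test" }
            ]
          }
        ]
      }
    }

Every time Claude edits or writes a file, the command runs; per the docs, a hook that exits with code 2 feeds its output back to Claude to fix.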
| ▲ | WorldMaker 2 hours ago | parent [-] | | If you force the LLM to solve every test failure, this can also lead to the same failure modes as very junior developers coding to the tests rather than the problem. I've seen all of:
1) I broke the tests, guess I should delete them.
2) I broke the tests, guess the code I wrote was wrong, guess I should delete all of that code I wrote.
3) I broke the tests, guess I should keep adding more code and scaffolding. Another abstraction layer might work? What if I just add skeleton code randomly - does this random-code whack-a-mole work?
That last one can be particularly "fun", because already verbose LLM code skyrockets into baroque million-line PRs when left truly unsupervised, and that PR still won't build or pass tests. There's no true understanding by an LLM. Forcing it to lint/build can be important/useful, but it's still not a cure-all, and it leads to even more degenerate cases than hand-holding it. |
| |
| ▲ | MontyCarloHall 6 hours ago | parent | prev | next [-] | | It's almost as if you could recapitulate each of these discussions using an LLM! | |
| ▲ | specialist 4 hours ago | parent | prev | next [-] | | > ...until everyone gets tired of talking about [LLMs] Small price to pay for shuffling Agile Manifesto off the stage. | |
| ▲ | dboreham 4 hours ago | parent | prev [-] | | My experience with the latest Claude Code has been: it's not nearly as bad as you say. |
|
|
| ▲ | rhubarbtree 11 hours ago | parent | prev | next [-] |
| Does anyone have a link to a video that uses Claude Code to produce clean robust code that solves a non trivial problem (ie not tic tac toe or a landing page) more quickly than a human programmer can write? I don’t want a “demo”, I want a livestream from an independent programmer unaffiliated with any AI company and thus not incentivised to hype. I want the code to have subsequently been deployed in production and demonstrably robust, without additional work outside of the livestream. The livestream should include code review, test creation, testing, PR creation. It should not be on a greenfield project, because nearly all coding is not. I want to use Claude and I want to be more productive, but my experience to date is that for writing code beyond autocomplete AI is not good enough and leads to low quality code that can’t be maintained, or else requires so much hand holding that it is actually less efficient than a good programmer. There are lots of incentives for marketing at the grassroots level. I am totally open to changing my mind but I need evidence. |
| |
| ▲ | M4v3R 9 hours ago | parent | next [-] | | I live-streamed building a tower defense game over the span of a week, entirely using AI. I've also written down all the prompts that were used to create this game; you can read about it here: https://news.ycombinator.com/item?id=44463967 Mind you, I'd never written a non-trivial game before in my life. It would have taken me weeks to do this on my own without any AI assistance. Right now I'm working on a 3D world map editor for Final Fantasy VII that was also almost exclusively vibe-coded. It's almost finished and I plan a write-up and a video about it when I'm done. Now of course you've made so many qualifiers in your post that you'll probably dismiss this as "not production", "not robust enough", "not clean" etc. But this doesn't matter to me. What matters is I manage to finish projects that I would not otherwise if not for the AI coding tools, so having them is a huge win for me. | |
| ▲ | hvb2 9 hours ago | parent | next [-] | | > What matters is I manage to finish projects that I would not otherwise if not for the AI coding tools, so having them is a huge win for me. I think the problem is in your definition of finishing a project. Can you support said code, can you extend it, are you able to figure out where bugs are when they show up?
In a professional setting, the answer to all of those should likely be yes. That's what production code is. | | |
| ▲ | ffsm8 8 hours ago | parent [-] | | I disagree with your sentiment. The difference isn't in what finishing a project means; it's the dissonance between what M4v3R and rhubarbtree understand by "nontrivial production" software. When you're working in enterprise, you usually have multiple stakeholders, each defining sometimes even conflicting requirements for the behavior of your software. And you're required to adhere to these requirements stringently. That's an environment that's inherently a bad fit for vibe coding. It can still be used there, too, but you will not get a 2-3x speed-up, because the LLM will always introduce minor behavioral changes - which aren't important in M4v3R's scenario, but a complete deal breaker for rhubarbtree. From my own experience, I don't get a speed-up at all via Copilot agentic mode (Claude Code is banned at my workplace). But I have had a significant boost in productivity in projects that don't need to adhere to any specific spec - namely projects I do on my own time (with Claude Code right now). I still use Copilot agentic mode though. While I haven't timed myself, I don't think I'm faster with it whatsoever. It's just less mentally involved in a lot of scenarios, so it's less exhausting - leaving more energy for side projects. | |
| ▲ | mattmanser 7 hours ago | parent [-] | | I don't believe it's to do with the requirements. I think you'll still hit the same problems if those greenfield projects grow. It's still fundamentally about the code. I think you're missing the difference between professional software of 10-100k+ lines of code and a quick 3k-line greenfield project. In a few thousand lines of code you can get away with a massive amount of code bloat, quick hacks and inconsistent APIs. In a program that's anything more than a few thousand lines, you can't. It just becomes too confusing. You have to be deliberate. Code has to follow patterns so the cognitive load is lowered. Stuff has to be split up in a predictable manner. And there's another problem: sensible and predictable maintenance. Changes and fixes have to be targeted and specific. They have to be written to avoid side effects. For organisation, it's been a huge effort on everyone's part these last 30 years to achieve that: make code understandable by organising it better. From one direction, languages have improved, with authors reducing boilerplate and cross-pollinating ideas between languages, like anonymous methods. From the other, it's developers inventing and describing patterns, or KISS, or the single responsibility principle. Or even seemingly trivial things like choosing predictable folder structures and enforcing indentation rules[1]. I'm starting to feel that's often the main skill a senior dev brings to the table: organising code well. Better code organization has made it possible for developers to make larger programs. Code organisation is a need that becomes a big problem if you're not doing it well in large projects, but not really a problem if you're not doing it well in small projects. And right now, AI isn't very good at code organisation. We might believe that you have to have a mental model of the whole program in your head, something an LLM is just not capable of right now. And I don't know if that's going to turn out to be a solvable problem, as it seems like a huge context problem. For maintenance, I'm not sure. AI seems pretty terrible at it. It often rewrites everything and throws the baby out with the bathwater. Again, it's a context problem. Both could turn out to be easy to solve for this generation of AI, in the end. [1] Younger programmers will not believe that even 15/20 years ago it was still a common problem that developers did not bother to indent their code consistently. In my first two jobs I'd regularly hit inconsistently indented code. | |
| ▲ | MGriisser 4 hours ago | parent [-] | | I personally find Claude Code has no real issues working and producing code in the 40k LoC Ruby on Rails repo I work in, nor in the 45k LoC Elixir/Phoenix repo I work in. For the last few months I'd say 99% of all changes I make to both are purely via Claude Code; I almost never use my editor anymore at all. It's common that things don't work on the first try or aren't exactly what I want, but usually just giving Claude an error or further instructions will fix it in an iteration or two. I think the code organization isn't amazing, but it's fine and frankly not that much of a concern to me, as I'm usually just reading diffs and not digging around in the code much myself. | |
| ▲ | ffsm8 2 hours ago | parent [-] | | Totally off topic, but the other day I was considering trying out Elixir for a mainly vibe-coded project, mainly because I thought the way you can structure code in it should be pretty much optimal for LLM-driven development. I haven't tried it yet, but I thought Elixir's easily implementable static analysis of code could make enforcement highly useful whenever the LLM goes off the rails, and an umbrella architecture would make modularity well established. Modules could all define their own contexts via nested CLAUDE.md files, and subagents could be used to give them explicit implementation details - something like the layout sketched below. Did you try something like that before, MGriisser? (successfully or not?) |
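For concreteness, a hypothetical umbrella layout of the kind being described (Claude Code does pick up nested CLAUDE.md files in subdirectories; the app names here are made up):

    my_umbrella/
      CLAUDE.md            # project-wide rules: style, test commands, module boundaries
      apps/
        billing/
          CLAUDE.md        # context and contracts specific to the billing app
        web/
          CLAUDE.md        # conventions for the Phoenix frontend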
|
|
|
| |
| ▲ | sksrbWgbfK 4 hours ago | parent | prev [-] | | Unless you write tower defense games all day long for a living, I don't know how it's interesting. |
| |
| ▲ | infamousclyde 8 hours ago | parent | prev | next [-] | | Jon Gjengset (of MIT Missing Semester, Rust for Rustaceans, etc.) shared a stream making changes of increasing complexity to a geospatial math library in Rust. He's an excellent engineer, and was able to pick apart AI-suggested changes liberally. The caveat is that the video is a bit long, but segmented nicely. I think he had a positive experience overall, but it was clear throughout the stream that he was not yielding control to a pure-agent workflow soon. https://youtu.be/eZ7DVHAK8hw?si=vWW4kz2qiRRceNMQ | | |
| ▲ | ochronus 10 hours ago | parent | prev | next [-] | | I agree. Based on my very subjective and limited experience (plus friends/colleagues), when it comes to producing solutions, what you get from AI is what you get from your 2-day hackathon - then you spend months making it production-ready. And your starry-eyed CEO is asking the same old question: "How come everything takes so long when a 2-person team over two days was able to produce a shiny new thing?!" sigh. Could be used for early prototyping, though, before you hire your first engineers just to fire them 6 months later. | |
| ▲ | jf22 3 hours ago | parent [-] | | Yeah but you get the two days of hacking in 15 minutes. And I highly doubt you spend months, as in 5+ weeks at the least making it production ready. What even is "production readiness?" 100% fully unit tested and ready for planetary hyper scale or something? 95% of the human generated software I work on is awful but somehow makes people money. | | |
| ▲ | ruszki 39 minutes ago | parent [-] | | First of all, you can rarely write down in English what you want in 15 minutes… It's even common for the specification to be longer than its implementation. Just look at tests. Especially if you want to do something which was never done before, the disparity can be staggering. Claude Code, for example, is also not that quick at all. It produces some code quickly, but even scaffolding three hello-world-level example projects together definitely takes more than an hour. And that's with zero novelty. The first version of code is done quickly, but the continuous loop of self-corrections after that takes a long time. Even with Serena, Context7, and other MCPs. And, of course, that's without real code review. That's easily hours even with just a few thousand lines of code, if it uses something which you don't know. But I know that almost everybody gave up understanding "their" "own" code during vibe coding. Even before AIs, it was a well-known fact that real code reviewing is hard, and people rarely did it. AI can make you quicker in certain situations, but these "15 minutes" claims are totally baseless. This is one reason why many people are against AIs, vibe coding, etc. These stupid claims cannot hold up to even the smallest scrutiny. |
|
| |
| ▲ | coffeeri 10 hours ago | parent | prev | next [-] | | This video [0] is relevant, though it actually supports your point - it shows Claude Code struggling with non-trivial tasks and needing significant hand-holding. I suspect videos meeting your criteria are rare because most AI coding demos either cherry-pick simple problems or skip the messy reality of maintaining real codebases. [0] https://www.youtube.com/watch?v=EL7Au1tzNxE | | |
| ▲ | thecupisblue 9 hours ago | parent | next [-] | | Great video! Even more, it shows a few things - how good it is with such a niche language, but also exposes some direct flaws. First off, Rust represents quite a small part of the training dataset (last I checked it was under 1% of the code dataset) in most public sets, so it's got waaay less training than other languages like TS or Java. You added 2 solid features, backed with tests and documentation and nice commit messages. 80% of devs would not deliver this in 2.5 hours. Second, there was a lot of time/token waste messing around with git and git messages. A few tips I noticed that could help you in the workflow: #1: Add a subagent for git that knows your style, so you don't poison direct Claude context and spend less tokens/time fighting it. #2: Claude has hooks; if your favorite language has a formatter like rustfmt, just use hooks to run rustfmt and similar. #3: Limit what they test, as most LLM models tend to write overeager tests, including testing if "the field you set as null is null", wasting tokens. #4: Saying "max 50 characters title" doesn't really mean anything to the LLM. They have no inherent ability to count, so you are relying on probability, which is quite low since your context is quite filled at this point. If they want to count the line length, they also have to use external tools. This is an inherent LLM design issue, and discussing it with an LLM doesn't get you anywhere really. | |
| ▲ | newswasboring 7 hours ago | parent | next [-] | | > #3: Limit what they test, as most LLM models tend to write overeager tests, including testing if "the field you set as null is null", wasting tokens. Heh, I write this for some production code too (python). I guess because python is not typed, I'm testing if my pydantic implementation works. | |
| ▲ | komali2 8 hours ago | parent | prev [-] | | > #1: Add a subagent for git that knows your style, so you don't poison direct Claude context and spend less tokens/time fighting it. I've not heard of this before - what does this mean practically? Some kind of invocation in Claude? Opening another Claude window? | |
| ▲ | thecupisblue 6 hours ago | parent | next [-] | | Oh you're about to unlock a whole new level of token burning.
There is an /agents command that lets you define agents for specific tasks or areas. Each of them has its own context and its own rules. Then Claude can delegate work to them when appropriate, or you can tell it directly to use a subagent - e.g. a subagent for your frontend, backend, a specific microservice, database, etc. Which ones you create/need depends quite a bit on your workflow, but they are a really nice quality-of-life change. |
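For reference, a subagent is defined as a markdown file with YAML frontmatter; a sketch of what .claude/agents/git-committer.md might contain (the name, tools, and rules here are illustrative):

    ---
    name: git-committer
    description: Writes commit messages in our house style. Use for all git commits.
    tools: Bash, Read, Grep
    ---
    Write commit subjects in imperative mood, under 50 characters.
    In the body, explain why the change was made, not what changed.

The body below the frontmatter becomes the subagent's system prompt, and it runs in its own context window.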
| ▲ | Aeolun 7 hours ago | parent | prev [-] | | You ask Claude to use an agent, and it'll spawn a subagent that takes a bunch of actions in a new context, then lets the original agent know only a summary of the results. |
|
| |
| ▲ | Aeolun 7 hours ago | parent | prev [-] | | > I suspect videos meeting your criteria are rare because most AI coding demos either cherry-pick simple problems or skip the messy reality of maintaining real codebases. Or we’re just having too much fun making stuff to make videos to convince people that are never going to be convinced. | | |
| ▲ | Difwif 5 hours ago | parent [-] | | I took a quick informal poll of my coworkers and the majority of us have found workflows where CC is producing 70-99% of the code on average in PRs. We're getting more done faster. Most of these people tend to be anywhere from 5-12 yrs professional experience. There are some concerns that maybe more bugs are slipping through (but also there's more code being produced). We agree most problems stem from:
1. Getting lazy and auto-accepting edits. Always review changes and make sure you understand everything.
2. Clearly written specification documents before starting complex work items
3. Breaking down tasks into a managable chunk of scope
4. Clean digestible code architecture. If it's hard for a human to understand (e.g: poor separation of concerns) it will be hard for the LLM too. But yeah I would never waste my time making that video. Having too much fun turning ideas into products to care about proving a point. |
|
| |
| ▲ | MontyCarloHall 6 hours ago | parent | prev | next [-] | | Forget a livestream, I want to hear from maintainers of complex, actively developed, and widely used open-source projects (e.g. ffmpeg, curl, openssh, sqlite). Highly capable coding LLMs have been out for long enough that if they do indeed have meaningful impact on writing non-trivial, non-greenfield/boilerplate code, it ought to be clearly apparent in an uptick of positive contributions to projects like these. | | |
| ▲ | stitched2gethr 4 hours ago | parent | next [-] | | This contains some specific data with pretty graphs: https://youtu.be/tbDDYKRFjhk?t=623 But if you do professional development and use something like Claude Code (the current standard, IMO) you'll quickly get a handle on what it's good at and what it isn't. I think it took me about 3-4 weeks of working with it at an overall 0x gain to realize what it's going to help me with and what it will make take longer. | | |
| ▲ | MontyCarloHall 3 hours ago | parent [-] | | This is a great conference talk, thanks for sharing! To summarize, the authors enlisted a panel of expert developers to review the quality of various pull requests, in terms of architecture, readability, maintainability, etc. (see 8:27 in the video for a partial list of criteria), and then somehow aggregate these criteria into an overall "productivity score." They then trained a model on the judgments of the expert developers, and found that their model had a high correlation with the experts' judgment. Finally, they applied this model to PRs across thousands of codebases, with knowledge of whether the PR was AI-assisted or not. They found a 35-40% productivity gain for easy/greenfield tasks, 10-15% for hard/greenfield tasks, 15-20% for easy/brownfield tasks, and 0-10% for hard/brownfield tasks. Most productivity gains went towards "reworked" code, i.e. refactoring of recent code. All in all, this is a great attempt at rigorously quantifying AI impact. However, I do take one major issue with it. Let's assume that their "productivity score" does indeed capture the overall quality of a PR (big assumption). I'm not sure this measures the overall net positive/negative impact to the codebase. Just because a PR is well-written according to a panel of expert engineers doesn't mean it's valuable to the project as a whole. Plenty of well-written code is utterly superfluous (trivial object setters/getters come to mind). Conversely, code that might appear poorly written to an outsider expert engineer could be essential to the project (the highly optimized C/assembly code of ffmpeg comes to mind, or to use an extreme example, anything from Arthur Whitney). "Reworking" that code to be "better written" would be hugely detrimental, even though the judgment of an outside observer (and an AI trained on it) might conclude that said code is terrible. |
| |
| ▲ | brookst 6 hours ago | parent | prev [-] | | So what percentage of human programmers, in the entire world, do you think contribute to meaningful projects like those? | | |
| ▲ | MontyCarloHall 6 hours ago | parent [-] | | I picked these specific projects because they are a) mature, b) complex, and as a result c) unlikely to have development needs for lots of new boilerplate code. I would estimate the majority of developers spend most of their time on problems encompassing all three of these, even if their software is not as meaningful/widely used as the previous examples. Everyone knows that LLMs are fantastic at generating greenfield boilerplate very quickly. They are an invaluable rapid prototyping/MVP generation tool, and that in itself is hugely useful. But that's not where developers spend most of their time. They spend it maintaining complicated, mature codebases, and the utility of LLMs is much less proven for that use case. This utility would be most easily measured in contributions to open-source projects, since all commits are public and maintainers have no monetary incentive to misrepresent the impact of AI [0, 1, 2, ...]. [0] https://www.businessinsider.com/anthropic-ceo-ai-90-percent-... [1] https://www.cnbc.com/2025/06/26/ai-salesforce-benioff.html [2] https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a... |
|
| |
| ▲ | stared 6 hours ago | parent | prev | next [-] | | I wouldn't dig for these. Vibe coding is a slot machine - sometimes you get wonderful results on the first prompt; more often than not, you don't. So a cherry-picked example is not proof that it works. If you want me to show an example of vibe coding, I bet I can migrate someone's blog to Astro with Claude Code faster than a frontend engineer. > It should not be on a greenfield project, because nearly all coding is not. Well, Claude Code does not work best for existing projects. (With some exceptions.) | |
| ▲ | simonw 8 hours ago | parent | prev | next [-] | | Armin Ronacher (long-time Python and Rust open source community figure, creator of Flask and Jinja among others) has several YouTube videos that partially fit the bill. https://www.youtube.com/watch?v=sQYXZCUvpIc and https://www.youtube.com/watch?v=Y4_YYrIKLac and https://www.youtube.com/watch?v=tg61cevJthc | | |
| ▲ | ku1ik 6 hours ago | parent [-] | | I watched one of those videos and it was very underwhelming, imho not really selling Claude Code to anyone who isn’t convinced. |
| |
| ▲ | sirpalee 9 hours ago | parent | prev | next [-] | | I had success with it on a large, established project when using it for refactoring, moving around functions, implementing simple things and writing documentation. It failed when implementing complex new features and horribly went off the rails when trying to debug issues. Almost all its recommendations were wrong, and it kept trying to change things that certainly weren't the problem. | | |
| ▲ | apercu 7 hours ago | parent [-] | | This matches my experience as well. One unexpected benefit is that I learned a couple pieces of hardware inside and out, because LLMs make so many mistakes. If I hadn't used an LLM I wouldn't have gone down all these rabbit holes based on incorrect info - I would have just read the docs and solved my use case, but missed out on deeper understanding. Just reinforces my biases that LLMs are currently garbage for anything new and complicated. But they are a great interactive note-taker and brainstorming tool. |
| |
| ▲ | adastral 2 hours ago | parent | prev | next [-] | | PostgresTV livestreams "vibe coding" 1h sessions implementing small PostgreSQL features with Cursor (mostly claude-4-sonnet model) every week, by experienced PostgreSQL contributors. [0] is their latest stream. I personally have not watched much, but it sounds just like what you are looking for! [0] https://www.youtube.com/watch?v=3MleDtXZUlM | |
| ▲ | mathieuh 10 hours ago | parent | prev | next [-] | | I actually don't think I've ever had AI solve a non-trivial problem by itself. I do find it useful but I always have to give it the breakthrough which it can then implement. | |
| ▲ | vincent_builds 3 hours ago | parent | prev | next [-] | | Author here. I think that's a great idea. I've considered live-streaming my work a few times, but all my work is on closed-source backend applications with sensitive code and data. If I ever get to work on an open-source product, I'll ask about live-streaming it. I think it would be a fun experience. Although I cannot show the live stream or the code, I am writing and deploying production code for a brownfield project. Two recent production features: 1. Quota crossing detection system for billable metrics
- Complex business logic for billing infrastructure
- Detects when usage crosses configurable thresholds across multiple metric types
- Time: 4 days while working on other smaller tasks in parallel, vs probably 10 days focused without AI
2. Sentry monitoring wrapper for metering cron jobs
- Reusable component wrapping all cron jobs with Sentry monitoring capabilities
- Time: 1 day in parallel with other tasks, vs 2 days focused
As you can probably tell, my work is not glamorous :D. It's all the head-scratching backend work, extending the existing system with more capabilities or making it more robust. I agree there is a lot of hand-holding required, but I'm betting on the systems getting better as time goes on. We are only two years into this AI journey, and the capabilities will most likely improve over the next few years. |
| ▲ | sunir 4 hours ago | parent | prev | next [-] | | I've built an agent system to quality-control the output following my engineering know-how. The quality is much better, but it is much slower than a human engineer. However, that's irrelevant to me. If I can build two projects a day I am more productive than if I can build one. And more importantly, I can build projects that increase my velocity and capability. The difference is I run my own business, so that matters to me more than my value or aptitude as an engineer. |
| ▲ | boesboes 10 hours ago | parent | prev | next [-] | | I've been using it to do all my work for the last month or two and have decided it's not worth it. I haven't made any recordings or anything, so this is purely my subjective experience:
It's OK at greenfield stuff, with some hand-holding to do things properly all the time. It knows the framework well, but won't always use it correctly, and goes off on weird detours to 'debug' things that fail because of that.
But on a bigger refactor of legacy code, that is well tested and where the 'migration' process to the new architecture is documented, it was just very infuriating. One moment it seems to be doing alright, and then suddenly I'm going backwards for days because it just makes things look like they work. It gets stuck on bad ideas and keeps trying them. Keeps making the same mistakes over and over, despite clear instruction on how to do it correctly. I think it misses a feedback loop. Something that evaluates what went wrong, what works, what won't, and remembers that, and then can use that to make better plans. From making sure it runs the tests correctly (instead of trying 5 different methods each time) to how to do TDD and what comments to add. | |
| ▲ | sunnyam 9 hours ago | parent [-] | | I have the same opinion, but my worry with this attitude is that it's going to hold me back in the long run. A common thread in articles about developers using AI is that they're not impressed at first, but then they write more precise instructions and provide context in a more intuitive manner for the AI to read, and that's the point at which they start to see results. Would these principles not apply to regular developers as well? I suspect that most of my disappointment with these tools is that I haven't spent enough time learning how to use them correctly. With Claude Code you can tell it what it did wrong. It's a bit hit-or-miss as to whether it will take your comments on board (or take them too literally), but I do think it's too powerful a tool to just ignore. I don't want someone to just come and eat my cake because they've figured out how to make themselves productive with it. | |
| ▲ | apercu 6 hours ago | parent [-] | | I think of current-state LLMs as precocious but green assistants that are sometimes useful but often screw up. It requires a significant amount of hand-holding, but it's still usually a net positive in my workflow - only a modest (admittedly arbitrary) productivity bump (e.g. 10-15%). I feel like if I can get better at reining in LLMs I can improve this productivity gain a bit more, but the idea that we can wholesale replace technical people is not realistic yet. If I were a non-tech non-specialist and/or had no business skills/experience and my job was mostly office admin, I would be retraining, however, because those jobs may be over except as vanity positions. |
|
| |
| ▲ | dewey 10 hours ago | parent | prev | next [-] | | One of these things where you just have to put in the work yourself for a while and see how it works for your workflow and project. | |
| ▲ | sdeframond 7 hours ago | parent | prev | next [-] | | Does any experienced dev have experience outsourcing to another dev that produces clean robust code that solves a non trivial problem (ie not tic tac toe or a landing page) more quickly than she would by herself? I think not. The reason is about missing context. Such non-trivial problems have a lot of specific unwritten context. It takes a lot of effort to share that context. Often more than doing anything one self. | |
| ▲ | Kiro 10 hours ago | parent | prev | next [-] | | Very few people want to record themselves doing stuff or have an incentive to convince anyone except for winning internet arguments. | | |
| ▲ | nosianu 8 hours ago | parent [-] | | > Very few people .... have an incentive to convince anyone We are already only talking about the subset that writes AI blog posts, not about all of humanity. |
| |
| ▲ | lysecret 10 hours ago | parent | prev | next [-] | | https://news.ycombinator.com/item?id=44159166 | |
| ▲ | benterix 10 hours ago | parent | prev | next [-] | | I guess someone could make such a video, the question is, would anyone have the patience to watch it. | |
| ▲ | sneak 10 hours ago | parent | prev | next [-] | | Most of the code I write is greenfield projects. I’m pretty spoiled, I guess. Claude Code has helped me ship a lot of things I always wanted to build but didn’t have time to do. | |
| ▲ | wooque 8 hours ago | parent | prev | next [-] | | You got it wrong, the purpose of this blog post is not marketing Claude Code, but marketing their company. Writing about AI just happens to get more eyeballs. | |
| ▲ | brookst 6 hours ago | parent | prev [-] | | You’re coming at this from a highly biased and even angry position, which means I don’t think you’ll be satisfied with anything people can show you. Which isn’t entirely unreasonable; AI is not really there yet. If you took this moment and said AI will never get better, and tools and processes will never improve to better accommodate AI, and the only fair comparison is a top-tier developer, and the only legitimate scenario is high quality human-maintainable code at scale… then yes, AI coding is a lot of hype with little value. But that’s not what’s going on, is it? The trajectory here is breathtaking. A year ago you could have set a much lower bar and AI still would have failed. And the tooling to automate PRs and documentation was rough. AI is already providing massive leverage to both amateur and professional developers. They use the tools differently (in my world the serious developers mostly use it for boilerplate and tests). I don’t think you’ll be convinced if the value until the revolution is in the past. Which is fine! For many of us (me being in the amateur but lifelong programmer camp) it’s already delivering value that makes its imperfections worthwhile. Is the code I’m generating world class, ready to be handed over to humans at enterprise sclae? No, definitely not. But it exists, and the scale of my amateur projects has gone through the roof, while quality is also up because tests take near zero effort. I know it won’t convince you, and you have every right to be skeptical and dismiss the whole thing as marketing. But IMO rejecting this new tech in the short term means you’re in for a pretty rough time when the evidence is so insurmountable. Which might be a year or two. Or even three! |
|
|
| ▲ | swframe2 20 hours ago | parent | prev | next [-] |
Preventing garbage just requires that you take into account the cognitive limits of the agent. For example:
1) Don't ask for a large/complex change. Ask for a plan, but ask it to implement the plan in small steps, and ask the model to test each step before starting the next.
2) For really complex steps, ask the model to write code to visualize the problem and solution.
3) If the model fails on a given step, ask it to add logging to the code, save the logs, run the tests, and then review the logs to determine what went wrong. Do this repeatedly until the step works well.
4) Ask the model to look at your existing code and determine how it was designed to implement a task. Sometimes the model will put all of the changes in one file even though your code has a cleaner design that the model doesn't take into account.
I've seen other people blog about their tricks and tips. I do still see garbage results, but not as high as 95%. |
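As a made-up illustration of point 1, the difference is between "build the CSV import feature" and something like:

    Plan the CSV import feature, but don't write any code yet. Break the
    plan into steps small enough to implement and test independently.
    Then implement step 1 only, run its tests, and stop for my review
    before moving on to step 2.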
| |
| ▲ | rco8786 19 hours ago | parent | next [-] | | I feel like I do all of this stuff and still end up with unusable code in most cases, and the cases where I don't I still usually have to hand massage it into something usable. Sometimes it gets it right and it's really cool when it does, but anecdotally for me it doesn't seem to be making me any more efficient. | | |
| ▲ | enobrev 13 hours ago | parent | next [-] | | > it doesn't seem to be making me any more efficient That's been my experience. I've been working on a 100% vibe-coded app for a few weeks. API, React-Native frontend, marketing website, CMS, CI/CD - all of it without changing a single line of code myself. Overall, the resulting codebase has been better than I expected before I started. But I would have accomplished everything it has (except for the detailed specs, detailed commit log, and thousands of tests) in about 1/3 of the time. | |
| ▲ | fourthark 10 hours ago | parent [-] | | How long would it have taken if you had written “the detailed specs, detailed commit log, and thousands of tests”? | | |
| ▲ | enobrev 2 hours ago | parent | next [-] | | The specs would not likely have happened at all, since this is a solo project; although this experience has led me to want to write these things out more thoroughly, even for myself. It's impressive how little work I need to put in going this route to have fairly thorough actionable specs for pretty much every major decision I've made through the process. The commits - some would be detailed, plenty would have been "typo" or "same as last commit, but works this time" The tests - Probably would have been decent for the API, but not as thorough. Likely non-existent for the UI. As for time - I agree with the other response - I wouldn't have taken the time. | |
| ▲ | veber-alex 9 hours ago | parent | prev [-] | | -1 time, because it would never have happened without AI |
|
| |
| ▲ | jaggederest 18 hours ago | parent | prev [-] | | The key is prompting. Prompt to within an inch of your life. Treat prompts as source code - edit them in files, use @ notation to bring them into the console. Use Claude to generate its own prompts - https://github.com/wshobson/commands/ and https://github.com/wshobson/agents/ are very handy, they include a prompt-engineer persona. I'm at the point now where I have to yell at the AI once in a while, but I touch essentially zero code manually, and it's acceptable quality. Once I stopped and tried to fully refactor a commit that CC had created, but I was only able to make marginal improvements in return for an enormous time commitment. If I had spent that time improving my prompts and running refactoring/cleanup passes in CC, I suspect I would have come out ahead. So I'm deliberately trying not to do that. I expect at some point on a Friday (last Friday was close) I will get frustrated and go build things manually. But for now it's a cognitive and effort reduction for similar quality. It helps to use the most standard libraries and languages possible, and great tests are a must. Edit: Also, use the "thinking" commands. think / think hard / think harder / ultrathink are your best friend when attempting complicated changes (of course, if you're attempting complicated changes, don't.) | | |
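For instance, a prompt file pulled in with @ plus a thinking keyword might look like this (a made-up invocation; the path is illustrative):

    > think hard, then refactor the session-handling module following the
      plan in @prompts/session-refactor.md. Run the linters and tests
      before reporting back.

The @path syntax inlines the file's contents into the prompt, so the same reviewed prompt can be reused and versioned like source.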
| ▲ | thayne 15 hours ago | parent | next [-] | | This works fairly well for well defined, repetitive tasks. But at least for me, if you have to put that much effort into the prompt, it is likely easier just to write the code myself. | | |
| ▲ | masto 6 hours ago | parent | next [-] | | Sometimes I spend half an hour writing a prompt and realize that I’ve basically rubber-ducked the problem to the point where I know exactly what I want, so I just write the code myself. I have been doing my best to give these tools a fair shake, because I want to have an informed opinion (and certainly some fear of being left behind). I find that their utility in a given area is inversely proportional to my skill level. I have rewritten or fixed most of the backend business logic that AI spits out. Even if it’s mostly ok on a first pass, I’ve been doing this gig for decades now and I am pretty good at spotting future technical debt. On the other hand, I’m consistently impressed by its ability to save me time with UI code. Or maybe it’s not that it saves me time, but it gets me to do more ambitious things. I’d typically just throw stuff on the page with the excuse that I’m not a designer, and hope that eventually I can bring in someone else to make it look better. Now I can tell the robot I want to have drag and drop here and autocomplete there, and a share to flooberflop button, and it’ll do enough of the implementation that even if I have to fix it up, I’m not as intimidated to start. | |
| ▲ | NitpickLawyer 11 hours ago | parent | prev | next [-] | | I've found it works really well for exploration as well. I'll give it a new library, and ask it to explore the library with "x goal" in mind. It then goes and agents away for a few minutes, and I get a mini-poc that more often than not does what I wanted and can also give me options. | |
| ▲ | xenobeb 5 hours ago | parent | prev [-] | | I am certain it has much to do with being in the training data or not. I have loved GPT5 but the other day I was trying to implement a rather novel idea that would be a rather small function and GPT5 goes from a genius to an idiot. I think HN has devolved into random conversations based on a random % of problems being in the training data or not. People really are having such different experiences with the models based on the novelty of the problems that are being solved. At this point it is getting boring to read. |
| |
| ▲ | rco8786 4 hours ago | parent | prev | next [-] | | Have you made any attempt to quantify your efficiency/output vs writing the code yourself? I've done all of these things you've mentioned, with varying degrees of success. But also everything you're talking about doing is time consuming and eats away at whatever efficiency gain CC claims to offer. | |
| ▲ | shaunxcode 17 hours ago | parent | prev | next [-] | | I am convinced that this comment once read aloud in the cadence of Ginsberg is a work of art! | | |
| ▲ | jaggederest 16 hours ago | parent [-] | | Now I'm trying to find a text-to-Ginsberg translator. Maybe he's who I sound like in my head. |
| |
| ▲ | fragmede 12 hours ago | parent | prev [-] | | How much voice control have you implemented? |
|
| |
| ▲ | nostrademons 19 hours ago | parent | prev | next [-] | | I've found that an effective tactic for larger, more complex tasks is to tell it "Don't write any code now. I'm going to describe each of the steps of the problem in more detail. The rough outline is going to be 1) Read this input 2) Generate these candidates 3) apply heuristics to score candidates 4) prioritize and rank candidates 5) come up with this data structure reflecting the output 6) write the output back to the DB in this schema". Claude will then go and write a TODO list in the code (and possibly claude.md if you've run /init), and prompt you for the details of each stage. I've even done this for an hour, told Claude "I have to stop now. Generate code for the finished stages and write out comments so you can pick up where you left off next time" and then been able to pick up next time with minimal fuss. | | |
| ▲ | hex4def6 19 hours ago | parent | next [-] | | FYI: You can force "Plan mode" by pressing shift-tab. That will prevent it from eagerly implementing stuff. | | |
| ▲ | jaggederest 18 hours ago | parent [-] | | > That will prevent it from eagerly implementing stuff. In theory. In practice, it's not a very secure sandbox and Claude will happily go around updating files if you insist / the prompt is bad / it goes off on a tangent. I really should just set up a completely sandboxed VM for it so that I don't care if it goes rm-rf happy. | | |
| ▲ | adastra22 18 hours ago | parent [-] | | Plan mode disables the tools, so I don't see how it would do that. A sandboxed devcontainer is worth setting up though. Lets me run it with --dangerously-skip-permissions | |
| ▲ | faangguyindia 16 hours ago | parent | next [-] | | how can it plan if it does not have access to file read, search, bash tools to investigate things? If it has access to bash tools then it's going to write code, via echo or sed. | | | |
| ▲ | jaggederest 18 hours ago | parent | prev [-] | | I don't know either but I've seen it write to files in plan mode. Very confusing. | | |
| ▲ | faangguyindia 9 hours ago | parent | next [-] | | It does not write anything in plan mode; it's documented here that it has only read-only tools available in plan mode: https://docs.anthropic.com/en/docs/claude-code/common-workfl... But here's the fine print: it has an "exit plan mode" tool, documented here: https://minusx.ai/blog/decoding-claude-code/#appendix So it can exit plan mode on its own and you wouldn't know! |
| ▲ | oxidant 16 hours ago | parent | prev | next [-] | | I've never seen it write a file in plan mode either. | |
| ▲ | EnPissant 16 hours ago | parent | prev [-] | | That's not possible. You are misremembering. | | |
| ▲ | sshine 16 hours ago | parent | next [-] | | I've seen it run commands that are naively assumed to be reading files or searching directories. I.e. not its own tools, but command-line executables. Its assumptions about these commands, and specifically the way it ran them, were correct. But I have seen it run commands in plan mode. | |
| ▲ | laborcontract 11 hours ago | parent | prev | next [-] | | No, it is possible. I just got it to write files both using Bash and its Write tools while in plan mode right now. | |
| ▲ | nomoreofthat 12 hours ago | parent | prev [-] | | It's entirely possible. Claude's security model for subagents/tasks is incoherent and buggy, far below the standard they set elsewhere in their product, and planning mode can use subagents/tasks for research. Permission limitations on the root agent have, in many cases, not been propagated to child agents, and they've been able to execute different commands. The documentation is incomplete and unclear, and even to the extent that it is clear, it has a different syntax with different limitations than is used to configure permissions for the root agent. When you ask Claude itself to generate agent configurations, as is recommended, it will generate permissions that do not exist anywhere in the documentation and may or may not be valid, but no error is emitted if an invalid permission is set. If you ask it to explain, it gets confused by its own documentation and tells you it doesn't know why it did that. I'm not sure if it's hallucinating or if the agent-generating-agent has access to internal details that are not documented anywhere and which the normal agent can't see. Anthropic is pretty consistently the best in this space in terms of security and product quality. They seem to actually care about doing software engineering properly. (I've personally discovered security bugs in several competing products that are more severe and exploitable than what I'm talking about here.) I have a ton of respect for Anthropic. Unfortunately, when it comes to subagents in Claude Code, they are not living up to the standard they have set. |
|
|
|
|
| |
| ▲ | yahoozoo 17 hours ago | parent | prev [-] | | How does a token predictor “apply heuristics to score candidates”? Is it running a tool, such as a Python script it writes for scoring candidates? If not, isn’t it just pulling some statistically-likely “score” out of its weights rather than actually calculating one? | | |
| ▲ | astrange 15 hours ago | parent | next [-] | | Token prediction is the interface. The implementation is a universal function approximator communicating through the token weights. | |
| ▲ | imtringued 4 hours ago | parent | prev [-] | | You can think of the K (key) matrix in attention as a neural network where each token is turned into a tiny classifier network with multiple inputs and a single output. The softmax activation function picks the most promising activations for a given output token. The V (value) matrix forms another neural network where each token is turned into a tiny regressor network that accepts the activation as an input and produces multiple outputs, which are summed up to produce an intermediate token that is then fed into the MLP layer. From this perspective, the transformer architecture is building neural networks at runtime. But there are some pretty obvious limitations here: the LLM operates on tokens, which means it can only operate on what is in the KV-cache/context window. If the candidates are not in the context window, it can't score them. |
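For reference, the comment above is a paraphrase of the standard attention computation (Q, K, V are the query/key/value projections of the tokens in the context window, d_k the key dimension):

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V

The softmax over QK^T is the "classifier" picking which tokens to attend to, and the multiplication by V is the weighted sum of value vectors that the "regressor" description refers to.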
|
| |
| ▲ | plaguuuuuu 17 hours ago | parent | prev | next [-] | | I've been using a few LLMs/agents for a while and I still struggle with getting useful output from them. In order for them not to do useless stuff I need to expend more energy on prompting than it would take to write the stuff myself. I find myself getting paranoid about minutiae in the prompt, turns of phrase, unintended associations, in case it gives shit-tier code because my prompt looked too much like something off experts-exchange or whatever. What I really want is something like a front-end framework but for LLM prompting, that takes away a lot of the fucking about with generalised stuff like prompt structure, and defaults to best practices for finding something in code, designing a new feature, or writing tests... | | |
| ▲ | Mars008 14 hours ago | parent [-] | | > What I really want is something like a front-end framework but for LLM prompting It's not simple to even imagine an ideal solution. The more you think about it, the more complicated your solution becomes. A simple solution will be restricted to your use cases; a generic one is either visual or a programming language. I'd like to have a visual constructor, a graph of actions, but that's complicated. A language is more powerful. |
| |
| ▲ | dontlaugh 19 hours ago | parent | prev | next [-] | | At that point, why not just write the code yourself? | | |
| ▲ | lucasyvas 19 hours ago | parent | next [-] | | I reached this conclusion pretty quickly. With all the hand holding I can write it faster - and it’s not bragging, almost anyone experienced here could do the same. Writing the code is the fast and easy part once you know what you want to do. I use AI as a rubber duck to shorten that cycle, then write it myself. | | |
| ▲ | jprokay13 18 hours ago | parent | next [-] | | I am coming back to this. I’ve been using Claude pretty hard at work and for personal projects, but the longer I do it, the more disappointed I become with the quality of output for anything bigger than a script.
I do love planning things out and clarifying my thoughts. It’s a turbocharged rubber duck - but it’s not a great engineer | | |
| ▲ | searene 12 hours ago | parent | next [-] | | Me too. I’ve been playing with various coding agents such as Cursor, Claude Code, and GitHub Copilot for some time, and I would say that their most useful feature is educating me. For example, they can teach me a library I haven’t used before, or help me debug a production issue. Then I would choose to write the code by myself after I’ve figured everything out with their help. Writing code by myself is definitely faster in most cases. | | |
| ▲ | bootsmann 6 hours ago | parent [-] | | > For example, they can teach me a library I haven’t used before. How do you verify it is teaching you the correct thing if you don't have any baseline to compare it to? |
| |
| ▲ | bcrosby95 18 hours ago | parent | prev | next [-] | | My thoughts on scripts are: the output is pretty bad too, but it doesn't matter as much in a script, because it's just a short script, and all that really matters is that it kinda works. |
| ▲ | utyop22 18 hours ago | parent | prev [-] | | What you're describing is a glorified mirror. Doesn't that sound ridiculous to you? | | |
| ▲ | interstice 12 hours ago | parent | next [-] | | That's what rubber ducking is | | |
| ▲ | utyop22 9 hours ago | parent [-] | | It sounds better when you get more specific about what it is. Many people have fallen prey to this and gone a tad loopy. |
| |
| ▲ | jprokay13 14 hours ago | parent | prev [-] | | I am still working on tweaking how I work and design with Claude to hopefully unlock a level of output that I’m happy with. Admittedly, part of it is my own desire for code that looks a certain way, not just that which solves the problem. |
|
| |
| ▲ | 2muchcoffeeman 19 hours ago | parent | prev | next [-] | | I’ve been trapped in a hole of “can I get the agent to do this?” when the change would have taken me 1/10th the time by hand. Choosing which battles to pick is part of the skill at the moment. I use AI for a lot of boilerplate, tedious tasks I can’t quite do a vim recording for, and small targeted scripts. | |
| ▲ | skydhash 17 hours ago | parent | next [-] | | How much of this boilerplate do you actually have to write? Any script or complicated command that I had to write was worth recording as a bash alias or preserving somewhere. But they mostly live in my bash history or right next to the project. The boilerplate argument is getting quite old. | |
| ▲ | indiosmo 15 hours ago | parent | next [-] | | One recent example of boilerplate for me: I’ve been writing dbt models and I get it to write the schema.yml file for me based on the SQL. It’s basically just a translation, but with dozens of tables, each with dozens of columns, it gets tedious pretty fast. If given other files from the project as context it’s also pretty good at generating the table and column descriptions for documentation, which I would probably just not write at all if doing it by hand. |
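To give a feel for the translation being delegated, a minimal hypothetical dbt model and its schema.yml (names invented for illustration, not from the actual project):

    -- models/orders.sql
    select order_id, customer_id, ordered_at
    from {{ ref('stg_orders') }}

    # models/schema.yml
    version: 2
    models:
      - name: orders
        description: "One row per order."
        columns:
          - name: order_id
            description: "Primary key of the order."
          - name: customer_id
            description: "Foreign key to the customers model."
          - name: ordered_at
            description: "Timestamp the order was placed (UTC)."

Trivial for one table; tedious for dozens, which is exactly the shape of work being handed off here.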
| ▲ | 2muchcoffeeman 15 hours ago | parent | prev [-] | | I’m doing a lot of upgrades to neglected projects at the moment and I often need to do the same config over and over to multiple projects. I guess I could write a script, or get AI to write a script, but there’s no standard between projects. So I need the same thing over and over but from slightly different starting points. I think you need to imagine all the things you could be doing with LLMs. For me the biggest thing is so many tedious things are now unlocked. Refactors that are just slightly beyond the IDE, checking your config (the number of typos it’s picked up that could take me hours because eyes can be stupid), data processing that’s similar to what you have done before but different enough to be annoying. |
| |
| ▲ | shortstuffsushi 15 hours ago | parent | prev [-] | | A similar, non-LLM battle is a global find-and-replace that's _not quite identical_ everywhere. Do I just go through the 20 files and do it myself, or try to get clever with regex? Which is ultimately faster... | |
| ▲ | baq 11 hours ago | parent | next [-] | | I’ve just had to do exactly this: a one-line prompt and one example was the difference between mind-numbing work and a comfortable cup of coffee away from the monitor. | |
| ▲ | 2muchcoffeeman 11 hours ago | parent | prev [-] | | In this case LLM is probably the answer. I’ve done this exact thing. No messing with regex or manual work. Type a sentence and examine the result in a diff. |
|
| |
| ▲ | catdog 11 hours ago | parent | prev [-] | | In the grand scheme of things, writing the code isn't the hard part of software development. The hard parts are architecture and actually building the right thing, something an LLM can't really help you with. It's not AI; there is no intelligence. A language model, as the name says, deals with language. Current ones are surprisingly good at that, but it's still not more than that. | |
| ▲ | cpursley 8 hours ago | parent [-] | | What? Leading edge LLMs are great at architecture, schema design and that sort of thing if you give them enough context and are not working on anything too esoteric. I’d argue they are better at this than the actual coding part. |
|
| |
| ▲ | harrall 14 hours ago | parent | prev | next [-] | | I don’t do much of the deep prompting stuff but I find AI can write some code faster than I can and accurately most of the time. You just need to learn what those things are. But I can’t tell you any useful tips or tricks to be honest. It’s like trying to teach a new driver the intuition of knowing when to brake or go when a traffic light turns yellow. There’s like nothing you can really say that will be that helpful. | |
| ▲ | utyop22 18 hours ago | parent | prev | next [-] | | I'm finding what's happening right now kinda bizarre. The funny thing is - we need less. Less of everything. But an up-tick in quality. This seems to happen with humans with everything - the gates get opened, enabling a flood of producers to come in. But this causes a mountain of slop to form, and over time the tastes of folks get eroded away. Engineers don't need to write more lines of code / faster - they need to get better at interfacing with other folks in the business organisation and get better at project selection and making better choices over how to allocate their time. Writing lines of code is a tiny part of what it takes to get great products to market and to grow/sustain market share etc. But hey, good luck with that - one's thinking power is diminished over time by interfacing with LLMs etc. | |
| ▲ | mumbisChungo 18 hours ago | parent [-] | | >ones thinking power is diminished overtime by interacing with LLMs etc. Sometimes I reflect on how much more efficiently I can learn (and thus create) new things because of these technologies, then get anxiety when I project that to everyone else being similarly more capable. Then I read comments like this and remember that most people don't even want to try. | | |
| ▲ | utyop22 17 hours ago | parent [-] | | And? Go create more stuff. Come back and post here when you have built something that has commercial success. Show us all how it's done. Until then go away - more noise doesn't help. | | |
| ▲ | mumbisChungo 17 hours ago | parent [-] | | I don't think there's anything I could tell you about the companies I've built that would dissuade you from your perspective that everyone is as intellectually lazy as your projection suggests. | | |
| ▲ | skydhash 17 hours ago | parent [-] | | Not GP, but I really want to know how your process is better than anyone else. People have produced quite good software (as in solving problems) on CPU that’s less powerful than what’s on my smart plug. And whose principles is still defining today’s world. | | |
| ▲ | mumbisChungo 16 hours ago | parent [-] | | I just find that I learn faster by interrogating (or being interrogated by) a lossy encyclopedia than I do by reading textbooks or stackoverflow. I'm still the one doing the doing after the learning is complete. |
|
|
|
|
| |
| ▲ | kyleee 19 hours ago | parent | prev [-] | | Partly it seems to be less taxing for the human delivering the same amount of work. I find I can chat with Claude, etc and work more. Which is a double edged sword obviously when it comes to work/life balance etc. But also I am less mentally exhausted from day job and able to enjoy programming and side projects again. | | |
| ▲ | nicoburns 18 hours ago | parent [-] | | I guess each to their own? I can easily end up coding for 16 hours straight (having a great time) if I'm not careful. I can't imagine I'd have as much patience with an AI. | | |
| ▲ | KerrAvon 18 hours ago | parent [-] | | I wonder if this is an introvert vs extrovert thing. Chatting with the AI seems like at least as much work as coding to me (introvert). The folks who don't may be extroverts? | | |
| ▲ | dpkirchner 18 hours ago | parent | next [-] | | I don't feel like I need to say too much to the agent to get my work done. I'm pretty dang introverted. I just don't enjoy the work as much as I did when I was younger. Now I want to get things done and then spend the day on other more enjoyable (to me) stuff. |
| ▲ | halfcat 15 hours ago | parent | prev [-] | | There is some line here. I don’t know if it’s introvert/extrovert, but here are my observations. I’ve noticed colleagues who enjoy Claude Code are more interested in “just ship it!” (and anecdotally are more extroverted than myself). I find Claude Code to be oddly unsatisfying. Still trying to put my finger on it, but I think it’s that I quickly lose context. Even if I understand the changes CC makes, it’s not the same as wrestling with a problem and hitting roadblocks and overcoming them. With CC I have no bearing on whether I’m in an area of code with lots of room for error, or standing on the edge of a cliff where I can’t cross some line in the design. I’m way more concerned with understanding the design and avoiding future pain than my “ship it” colleagues (and anecdotally am way more introverted). I see what they build and, yes, it’s working, for now, but the table relationships aren’t right and this is going to have to be rebuilt later, except now it’s feeding a downstream report that’s being consumed by the business, so the beta version is now production. But the 20 other things this app touches indirectly weren’t part of the vibe-coding context, so the design obviously doesn’t account for that. It could, but of course the “ship it” folks aren’t the ones who are going to build out lengthy requirements and scopes of work and document how a dozen systems relate to and interact with each other. I guess I’m seeing that the speed limit of quality is still the speed of my understanding, and (maybe more importantly) that weaponizing my own obsession only works when I’m wrestling and overcoming, not just generating code as fast as possible. I do wonder about that weaponized obsession. People will draw or play music obsessively, something about the intrinsic motivation of mastery, and having AI create the same drawing, or music, isn’t the same in terms of interest or engagement. |
|
|
|
| |
| ▲ | MangoCoffee 19 hours ago | parent | prev | next [-] | | I've been vibe coding a couple of personal projects. I've found that test-driven development fits very well with vibe coding, and it's just as you said: break up the problem into small, testable chunks, get the AI to write unit tests first, and then implement the actual code. | |
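A minimal sketch of that loop, using a hypothetical slugify helper and Jest-style tests: the test file is written (or at least reviewed) first, and the agent is then asked to implement slugify until the suite passes, without touching the tests.

    // slugify.test.ts -- the spec-as-tests, written before the implementation
    import { slugify } from "./slugify";

    describe("slugify", () => {
      it("lowercases and hyphenates words", () => {
        expect(slugify("Hello World")).toBe("hello-world");
      });

      it("strips punctuation", () => {
        expect(slugify("Ready? Go!")).toBe("ready-go");
      });

      it("collapses repeated separators", () => {
        expect(slugify("a  -  b")).toBe("a-b");
      });
    });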
| ▲ | yodsanklai 19 hours ago | parent | next [-] | | Actually, all good engineering principles which reduce cognitive load for humans work for AI as well. | | |
| ▲ | BoiledCabbage 18 hours ago | parent | next [-] | | This is what's so funny about this. In some alternative universe I hope that LLMs never get any better. Because they force so many good things. They are the single closest thing we've ever had to objective evaluation of whether an engineering practice is better or worse. Simply because just about every single engineering practice that I see that makes coding agents work well also makes humans work well. And so many of these circular debates and other best practices (TDD, static typing, keeping todo lists, working in smaller pieces, testing independently before testing together, clearly defined codebase practices, ...) have all been settled in my mind. The most controversial take, and the one I dislike but may reluctantly have to agree with, is "Is it better for a business to use a popular language less suited for the task than a less popular language more suited for it." While obviously it's a sliding scale, coding agents clearly weigh in on one side of this debate... as little as I like seeing it. | |
| ▲ | shortstuffsushi 15 hours ago | parent | next [-] | | While a lot of these ideas are touted as "good for the org," in the case of LLMs, it's more like guard rails against something that can't reason things out. That doesn't mean that the practices are bad, but I would much prefer that these LLMs (or some better mechanism) everyone is being pushed to use could actually reason, remember, and improve, so that this sort of guarding wouldn't be a requirement for correct code. | |
| ▲ | kaffekaka 13 hours ago | parent [-] | | The things GP listed are fundamentally good practices. If LLMs get so good they don't need even these guardrails, ok great but that is a long way off, and until then I am really happy if the outcome of AI assisted coding is that we humans get better at using these ideas for ourselves. |
| |
| ▲ | kaffekaka 13 hours ago | parent | prev [-] | | Well put, I like this perspective. |
| |
| ▲ | colordrops 19 hours ago | parent | prev [-] | | This is the big secret. Keep code modular, small, single purpose, encapsulated, and it works great with vibe coding. I want to write a protocol/meta language similar to the markdown docs that Claude et al create that is per module, and defines behavior, so you actually program and compose modules with well defined interfaces in natural language. I'm surprised someone hasn't done it already. | | |
| ▲ | adastra22 18 hours ago | parent | next [-] | | My set of Claude agent files has an explicit set of interface definitions. Is that what you’re talking about? | | |
| ▲ | drzaiusx11 18 hours ago | parent | prev [-] | | Isn't what you're describing exactly what Kiro aims to solve? | | |
|
| |
| ▲ | alexsmirnov 12 hours ago | parent | prev | next [-] | | TDD is exactly what I'm unable to get from AI tools. Probably because training sets always have both code and tests. I've tried multiple models from all major providers, and all failed to create tests without seeing the code. One workflow that helps is to create a dirty implementation and generate tests for it. Then throw away the first code and use a different model for the final implementation. The best way is to create the tests yourself, and block any attempts to modify them. | |
| ▲ | MarkMarine 17 hours ago | parent | prev [-] | | Works great until it’s stuck and it starts just refactoring the tests to say true == true and calling it a day. I want the inverse of black box testing: the inside of the box has the model in it with the code, and it’s not allowed to reach outside the box and change the grades. Then I can just do the “Ralph Wiggum as a software engineer” loop to get over the reward-hacking tendencies. | |
| ▲ | 8n4vidtmkvmk 13 hours ago | parent [-] | | Don't let it touch the test file then? I usually give context to the LLM about what it's allowed to touch. I don't do big sweeping changes though. Don't trust LLMs for that. For small, focused changes it's great. |
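In Claude Code this can also be enforced declaratively in the project settings file (.claude/settings.json). Roughly like the following, though as noted upthread the permission-rule syntax is a moving target, so double-check the current docs; the paths here are hypothetical:

    {
      "permissions": {
        "deny": [
          "Edit(tests/**)",
          "Edit(**/*.test.ts)"
        ]
      }
    }

A deny rule beats hoping the prompt is obeyed, since it's checked by the harness rather than the model.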
|
| |
| ▲ | com2kid 15 hours ago | parent | prev | next [-] | | > 1) Don't ask for large / complex change. Ask for a plan but ask it to implement the plan in small steps and ask the model to test each step before starting the next. I asked Claude Code to read a variable from a .env file. It proceeded to write a .env parser from scratch. I then asked it to just use Node's built-in .env file parsing.... This was the 2nd time in the same session that it wrote a .env file parser from scratch. :/ Claude Code is amazing, but it'll go off and do stupid things even for simple requests. | | |
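For reference, the built-in support in question (version cutoffs from memory, so verify against the Node docs):

    // Node has shipped native .env handling for a while now.
    // Programmatic form (roughly Node 20.12+ / 21.7+):
    import { loadEnvFile } from "node:process";

    loadEnvFile(".env"); // populates process.env; throws if the file is missing
    const apiKey = process.env.API_KEY;

    // Or start the process with `node --env-file=.env app.js` (Node 20.6+)
    // and skip the call entirely.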
| ▲ | jason_zig 20 hours ago | parent | prev | next [-] | | I've seen people post this same advice and I agree with you that it works but you would think they would absorb this common strategy and integrate it as part of the underlying product at this point... | | |
| ▲ | noosphr 20 hours ago | parent | next [-] | | The people who build the models don't understand how to use the models. It's like asking people who design CPUs to build data-centers. I've interviewed with three tier-one AI labs and _no-one_ I talked to had any idea where the business value of their models came from. Meanwhile Chinese labs are releasing open source models that do what you need. At this point I've built local agentic tools that are better than anything Claude and OAI have as paid offerings, including the $2,000 tier. Of course they cost between a few dollars and a few hundred dollars per query, so until hardware gets better they will stay happily behind corporate moats and be used by the people blessed to burn money like paper. | |
| ▲ | criemen 18 hours ago | parent | next [-] | | > The people who build the models don't understand how to use the models. It's like asking people who design CPUs to build data-centers. This doesn't match the sentiment on hackernews and elsewhere that claude code is the superior agentic coding tool, as it's developed by one of the AI labs, instead of a developer tool company. | | |
| ▲ | noosphr 17 hours ago | parent [-] | | Claude Code is baby's first agentic tool. You don't see better ones from code-tooling companies because the economics don't work out. No one is going to pay $1,000 for a two-line change on a 500k-line code base after waiting four hours. LLMs today are the equivalent of a 4-bit ALU without memory being sold as a fully functional personal computer. And like ALUs today, you will need _thousands_ of LLMs to get anything useful done; also, as with ALUs in 1950, we're a long way off from a personal computer being possible. |
| |
| ▲ | Barbing 19 hours ago | parent | prev [-] | | Very interesting. And plausible. Doesn't specifically seem to jibe with the claim Anthropic made that they were worried about Claude Code being their secret sauce, leaving them unsure whether to publicly release it. (I know some are skeptical about that claim.) |
| |
| ▲ | nostrademons 19 hours ago | parent | prev | next [-] | | A lot of it is integrated into the product at this point. If you have a particularly tricky bug, you can just tell Claude "I have this bug. I expected output 'foo' and got output 'bar'. What went wrong?" It will inspect the code and sometimes suggest a fix. If you run it and it still doesn't work, you can say "Nope, still not working", and Claude will add debug output to the whole program, tell you to run it again, and paste the debug output back into the console. Then it will use your example to write tests, and run against them. | |
| ▲ | tombot 20 hours ago | parent | prev [-] | | Claude Code at least now lets you use its best model for planning mode and its cheapest model for coding mode. | | |
| |
| ▲ | ants_everywhere 17 hours ago | parent | prev | next [-] | | IMO by far the best improvement would be to make it easier to force the agent to work against a success criterion. Right now it's not easy to prompt Claude Code (for example) to keep fixing until a test suite passes. It always does some fixed amount of work until it feels it's most of the way there, and then stops. So I have to babysit it, repeatedly telling it that yes, I really mean for it to make the tests pass. | |
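One workaround is to pull the success criterion out of the prompt and into an outer loop around headless mode; a rough sketch, assuming the `claude -p` (print-mode) CLI, an npm test script, and whatever permission flags you're comfortable granting, with the usual caveats about unattended runs:

    # keep re-invoking the agent until the suite actually passes
    until npm test; do
      claude -p "npm test is failing. Run the suite, fix the failures, and do not modify the tests."
    done

The loop, not the model's self-assessment, decides when the work is done.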
| ▲ | MikeTheGreat 20 hours ago | parent | prev | next [-] | | Genuine question: What do you mean by " ask it to implement the plan in small steps"? One option is to write "Please implement this change in small steps?" more-or-less exactly Another option is to figure out the steps and then ask it "Please figure this out in small steps. The first step is to add code to the parser so that it handles the first new XML element I'm interested in, please do this by making the change X, we'll get to Y and Z later" I'm sure there's other options, too. | | |
| ▲ | Benjammer 20 hours ago | parent | next [-] | | My method is that I work together with the LLM to figure out the step-by-step plan. I give an outline of what I want to do, and give some breadcrumbs for any relevant existing files that are related in some way, ask it to figure out context for my change and to write up a summary of the full scope of the change we're making, including an index of file paths to all relevant files with a very concise blurb about what each file does/contains, and then also to produce a step-by-step plan at the end. I generally always have to tell it to NOT think about this like a traditional engineering team plan, this is a senior engineer and LLM code agent working together, think only about technical architecture, otherwise you get "phase 1 (1-2 weeks), phase 2 (2-4 weeks), step a (4-8 hours)" sort of nonsense timelines in your plan. Then I review the steps myself to make sure they are coherent and make sense, and I poke and prod the LLM to fix anything that seems weird, either fixing context or directions or whatever. Then I feed the entire document to another clean context window (or two or three) and ask it to "evaluate this plan for cohesiveness and coherency, tell me if it's ready for engineering or if there's anything underspecified or unclear" and iterate on that like 1-3 times until I run a fresh context window and it says "This plan looks great, it's well crafted, organized, etc...." and doesn't give feedback. Then I go to a fresh context window and tell it "Review the document @MY_PLAN.md thoroughly and begin implementation of step 1, stop after step 1 before doing step 2" and I start working through the steps with it. | | |
| ▲ | lkjdsklf 19 hours ago | parent [-] | | The problem is, by the time you’ve gone through the process of making a granular plan and all that, you’ve lost all the productivity gains of using the agent. As an engineer, especially as you get more experience, you can kind of visualize the plan for a change very quickly and flesh out the next step while implementing the current step. All you have really accomplished with the kind of process described is to make the world's least precise, most verbose programming language. | |
| ▲ | Benjammer 17 hours ago | parent | next [-] | | I'm not sure how much experience you have, I'm not trying to make assumptions, but I've been working in software for over 15 years. The exact skill you mentioned - being able to visualize the plan for a change quickly - is what makes my LLM usage so powerful, imo. I can use the right precise wording in my prompt to guide it to a good plan very quickly. As the other commenter mentioned, the entire above process only takes something like 30-120 minutes depending on scope, and then I can generate code in a few minutes that would take 2-6 weeks to write myself, working 8 hr days. Then, it takes something like 0.5-1.5 days to work out all the bugs and clean up the weird AI quirks, and maybe have the LLM write some playwright tests, or whatever testing framework you use for integration tests, to verify its own work. So yes, it takes significant time to plan things well for good results, and yes, the results are often sloppy in some parts and have weird quirks that no human engineer would make on purpose. But if you stick to working on prompt/context engineering and getting better and faster at the above process, the key unlock is not that it just does the same coding for you, with it generating the code instead. It's that you can work as a solo developer at the abstraction level of a small startup company. I can design and implement an enterprise-grade SSO auth system over a weekend that integrates with Okta and passes security testing. I can take a library written in one language and fully re-implement it in another language in a matter of hours. I recently took the native libraries for Android and iOS for a fairly large, non-trivial SDK, and had Claude build me a React Native wrapper library with native modules that integrates both native libraries and presents a clean, unified interface and typescript types to the react native layer. This took me about two days, plus one more for validation testing. I have never done this before. I have no idea how "Nitro Modules" works, or how to configure a react native library from scratch. But given the immense scaffolding abilities of LLMs, plus my debugging/hacking skills, I can get to a really confident place, really quickly, and ship production code at work with this process, regularly. |
| ▲ | adastra22 18 hours ago | parent | prev [-] | | It takes maybe 30min and then it can go off and generate code that would take literal weeks for me to write. There are still huge productivity gains being had. | | |
| ▲ | lkjdsklf 16 hours ago | parent [-] | | That has not been my experience at all. It takes 30-40 minutes to generate a plan and it generates code that would have taken 20-30 minutes to write. When it’s generating “weeks” worth of code, it inevitably goes off the rails and the crap you get goes in the garbage. This isn’t to say agents don’t have their uses, but I have not seen this specific workflow actually work. They’re great for refactoring (usually), crapping out proofs of concept, and debugging specific problems. They’re also great for exploring a new code base where you have little prior knowledge. It makes sense that they suck at generating large amounts of code that fits cohesively into the project. The context is too small. My code base is millions of lines of code. My brain has a shitload more of it in context than any of the models. So they have to guess and check and end up incorrect and poor, and I don’t. I know which abstractions exist that I can use. It doesn’t. Sometimes it guesses right. Oftentimes it doesn’t. And once it’s wrong, it’s fucked for the entire rest of the session, so you just have to start over. | |
| ▲ | adastra22 13 hours ago | parent [-] | | Works for me. Not vanilla Claude code though- you need to put some work into generating slash commands and workflows that keep it on task and catch the bad stuff. Take this for example: https://www.reddit.com/r/ClaudeAI/comments/1m7zlot/how_planm... This trick is just the basic stuff, but it works really well. You can add on and customize from there. I have a “/task” slash command that will run a full development cycle with agents generating code, many more (12-20) agent critics analyzing the unstaged work, all orchestrated by a planning agent that breaks the complex task into small atomic steps. The first stage of this project (generating the plan) is interactive. It can then go off and make 10kLOC code spread over a dozen commits and the quality is good enough to ship, most of the time. If it goes off the rails, keep the plan document but nuke the commits and restart. On the Claude MAX plan this costs nothing. This is how I do all my development now. I spend my time diagnosing agent failures and fixing my workflows, not guiding the agent anymore (other than the initial plan document). I still review every line of code before pushing changes. |
|
|
|
| |
| ▲ | conception 19 hours ago | parent | prev | next [-] | | I tell it to generate a todo.md file with hyper atomic todos each requiring 20 loc or less. Then have it go through that. If the change is too big, generate phases (5-25) and then do the todos for each phase. That plus some sort of reference docs/high level plan keeps it going along all right. | |
| ▲ | ants_everywhere 17 hours ago | parent | prev [-] | | What I do is make a step roughly a reviewable commit. So I'll say something like "evaluate the URL fetcher library for best practices, security, performance, and test coverage. Write this up in a markdown file. Add a design for single-flighting and retry policy. Break this down into steps so simple even the dumbest LLM won't get confused." Then I clear the context window and spawn workers to do the implementation. |
| |
| ▲ | adastra22 19 hours ago | parent | prev | next [-] | | This is why the job market for new grads and early-career folks has dried up. A seasoned developer knows that this is how you manage work in general, and just treats the AI like they would a junior developer - and gets good results. | |
| ▲ | CuriouslyC 19 hours ago | parent [-] | | Why bother handing stuff to a junior when an agent will do it faster while asking fewer questions, and even if the first draft code isn't amazing, you can just quality gate with an LLM reviewer that has been instructed to be brutal and do a manual pass when the code gets by the LLM reviewer. | | |
| ▲ | LtWorf 19 hours ago | parent [-] | | Because juniors learn while LLMs don't and you must explain the same thing over and over forever. | | |
| ▲ | adastra22 18 hours ago | parent [-] | | If you are explaining things more than once, you are doing it wrong. Which is not on you as the tools currently suck big time. But it is quite possible to have LLM agents “learn” by intelligently matching context (including historical lessons learned) to conversation. |
|
|
| |
| ▲ | rvnx 20 hours ago | parent | prev | next [-] | | Your tips are perfect. Most users will just give a vague task like "write a clone of Steam" or "create a rocket" and then blame Claude Code. If you want AI to code for you, you have to decompose your problem like a product owner would. You can get help from AI here as well, but you should have a plan and specifications. Once your plan is ready, you have to decompose the problem into different modules, then make sure each module is tested. The issue is often with the user, not the tool, as they have to learn how to use the tool first. | |
| ▲ | wordofx 19 hours ago | parent [-] | | > Most users will just give a vague tasks like: "write a clone of Steam" or "create a rocket" and then they blame Claude Code. This seems like half of HN with how much HN hates AI. Those who hate it or say it’s not useful to them seem to be fighting against it and not wanting to learn how to use it. I still haven’t seen good examples of it not working even with obscure languages or proprietary stuff. | | |
| ▲ | drzaiusx11 18 hours ago | parent | next [-] | | Anyone who has mentored as part of a junior engineer internship program AND has attempted to use current-gen AI tooling will notice the parallels immediately. There are key differences though that are worth highlighting. The main difference is that with the current batch of genai tools, the AI's context resets after use, whereas a (good) intern truly learns from prior behavior. Additionally, as you point out, the language and frameworks need to be part of the training set, since the AI isn't really "learning"; it's just prepopulating a context window for its pre-existing knowledge (token prediction), so ymmv depending on hidden variables from the secret (to you, the consumer) training data and weights. I use Ruby primarily these days, which is solidly in the "boring tech" camp, and most AIs fail to produce useful output that isn't Rails boilerplate. If I did all my IC contributions via directed intern commits I'd leave the industry out of frustration. Using only AI outputs for producing code changes would be akin to torture (personally.) Edit: To clarify, I'm not against AI use. I'm just stating that with the current generation of tools it is a pretty lackluster experience when it comes to net-new code generation. It excels at one-off throwaway scripts and at making large tedious refactors less of a drudge. I wouldn't pivot to it being my primary method of code generation until some of the more blatant productivity losses are addressed. |
| ▲ | hn_acc1 17 hours ago | parent | prev | next [-] | | When its best suggestion (for inline typing) is to bring back a one-off experiment in a different git worktree from 3 months ago that I only needed that one time... it does make me wonder. Now, it's not always useless. It's GREAT at adding debugging output and knowing which variables I just added and thus want to add to the debugging output. And that does save me time. And it does surprise me sometimes with how well it picks up on my thinking and makes a good suggestion. But I can honestly only accept maybe 15-20% of the suggestions it makes - the rest are often totally different from what I'm working on / trying to do. And it's C++. But we have a very custom library to do user-space context switching, and everything is built on that. |
| ▲ | LtWorf 19 hours ago | parent | prev | next [-] | | If you have to iterate 10 times, that is "not working", since it already wasted way more time than doing it manually to begin with. | |
| ▲ | halfcat 15 hours ago | parent | prev [-] | | > not wanting to learn how to use it I kind of feel this. I’ll code for days and forget to eat or shower. I love it. Using Claude code is oddly unsatisfying to me. Probably a different skillset, one that doesn’t hit my obsessive tendencies for whatever reason. I could see being obsessed with some future flavor of it, and I think it would be some change with the interface, something more visual (gamified?). Not low-code per se, but some kind of mashup of current functionality with graph database visualization (not just node force graphs, something more functional but more ergonomic). I haven’t seen anything that does this well, yet. |
|
| |
| ▲ | ccorcos 13 hours ago | parent | prev | next [-] | | Seems like this logic could all be represented in Claude.md and some agents. Has anyone done this? I’d love to just import that into my project because I’m using some of these tactics but it’s fairly manual and tedious. | |
| ▲ | biggc 14 hours ago | parent | prev | next [-] | | This sounds a lot like making the change yourself. | |
| ▲ | therein 14 hours ago | parent [-] | | It appeals to some people because they'd rather manage a bot and get it to do something they told it to do rather than do it themselves. |
| |
| ▲ | rmonvfer 19 hours ago | parent | prev | next [-] | | I’d like to add: keep some kind of development documentation where you describe in detail the patterns and architecture of your application and its components. I’ve seen incredible improvements just by doing this and using precise prompting to get Claude to implement full services by itself, tests included. Of course it requires manual correction later, but just telling Claude to check the development documentation before starting work on a feature prevents most hallucinations (that, and telling it to use the Context7 MCP for external documentation), at least in my experience. The downside to this is that 30% of your context window will be filled with documentation but hey, at least it won’t hallucinate API methods or completely forget that it shouldn’t reimplement something. Just my 2 cents. |
| ▲ | salty_frog 17 hours ago | parent | prev | next [-] | | This is my algorithm for wetware llms. | |
| ▲ | whateveracct 13 hours ago | parent | prev | next [-] | | that sounds like just coding it yourself with extra steps | | |
| ▲ | baq 11 hours ago | parent [-] | | Exactly, then you launch ten copies of yourself and write code to manage that yourself, maybe. |
| |
| ▲ | paulcole 20 hours ago | parent | prev | next [-] | | > Ask for a plan but ask it to implement the plan in small steps and ask the model to test each step before starting the next. Tried this on a developer I worked with once and he just scoffed at me and pushed to prod on a Friday. | | | |
| ▲ | renegat0x0 11 hours ago | parent | prev [-] | | Huh, I thought that AI was made to be magic. Click and it generates code. Turns out it is like magic, but you are an apprentice, and still have to learn how to wield it. | | |
|
|
| ▲ | ale 19 hours ago | parent | prev | next [-] |
It’s about time these types of articles actually include the types of tasks being “orchestrated” (as the author writes) that aren’t just plain refactoring chores or React boilerplate. Sanity has quite a backlog of long-requested features and the message here is that these agents are supposedly parallelizing a lot of the work. What kind of staff engineer has “80% of their code” written by a “junior developer who doesn't learn”? |
| |
| ▲ | mindwok 14 hours ago | parent | next [-] | | IMO “junior developer who doesn't learn“ is not quite right. Claude is more like a senior, highly academic engineer who has read all the literature but hasn't ever written any code. Amazing encyclopaedic knowledge, zero taste. I've been building commercial codebases with Claude for the last few months and almost all of my input is on taste and what defines success. The code itself is basically disposable. | |
| ▲ | all2 13 hours ago | parent | next [-] | | > The code itself is basically disposable. I'm finding this is the case for my work as well. The spec is the secret sauce, the code (and its many drafts) are disposable. Eventually I land on something serviceable, but until I do, I will easily drop a draft and start on a new one with a spec that is a little more refined. | | |
| ▲ | bjornsing 6 hours ago | parent | next [-] | | So how do you best store and iterate on the spec? One way I guess would be to work on a branch and modify Claude.md to reflect what the branch is for. Is that a good approach? Are there others? |
| ▲ | dotancohen 9 hours ago | parent | prev [-] | | I'd just like to add that the database design is the real secret sauce, even more important than external APIs in my opinion. | |
| ▲ | all2 an hour ago | parent | next [-] | | This is something that I've stumbled into as well. DB models AND dataflow. Getting both of those well spec'd makes things a lot easier. | |
| ▲ | mattmanser 6 hours ago | parent | prev [-] | | Well, not DB design really, you can achieve the same thing by defining your POCOs well. I switched entirely to code-first design years ago. If you haven't worked with a good ORM, you're really missing out, though I admit there was quite a bit of friction at first. | | |
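A minimal sketch of the code-first idea, using TypeORM-style decorators as one concrete flavor (the entities here are hypothetical, not from anyone's actual project): the classes are the schema, and the relationships live in the type definitions rather than in hand-written DDL.

    import {
      Entity, PrimaryGeneratedColumn, Column, ManyToOne, OneToMany,
    } from "typeorm";

    @Entity()
    export class Customer {
      @PrimaryGeneratedColumn()
      id: number;

      @Column()
      name: string;

      // one customer, many orders; the ORM derives the FK column from this
      @OneToMany(() => Order, (order) => order.customer)
      orders: Order[];
    }

    @Entity()
    export class Order {
      @PrimaryGeneratedColumn()
      id: number;

      @Column("int")
      totalCents: number; // money as integer cents, not floats

      @ManyToOne(() => Customer, (customer) => customer.orders)
      customer: Customer;
    }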
| ▲ | dotancohen 6 hours ago | parent [-] | | No, I really am talking about how the database is organised. Tables representing objects, normalisation, etc. Whether or not it is accessed through the application with an ORM. |
|
|
| |
| ▲ | baq 11 hours ago | parent | prev | next [-] | | > The code itself is basically disposable. This is key. We’re in mass production of software era. It’s easier and cheaper to replace a broken thing/part than to fix it, things being some units of code. | |
| ▲ | globular-toast 11 hours ago | parent | prev | next [-] | | If the code is disposable then where do all the rules and reasoning etc live apart from in your head? | | |
| ▲ | dotancohen 9 hours ago | parent [-] | | In the spec. | | |
| ▲ | globular-toast 4 hours ago | parent [-] | | Hmm... my code is the spec. It just happens to be executable. Is writing a precise spec in English easier than in a programming language? | | |
| ▲ | dotancohen 3 hours ago | parent [-] | | The spec contains ambiguities and the code contains bugs. Clarifying ambiguities in the spec with stakeholders allows one to reduce bugs in the code. | |
| ▲ | caseyohara an hour ago | parent [-] | | If you repeat this process until all ambiguities in the spec are eliminated, aren't you essentially left with code? Or at least something that looks more like code than plain English? |
|
|
|
| |
| ▲ | sanitycheck 8 hours ago | parent | prev [-] | | Eh, Claude is like a magical spaniel that can read and write very quickly, with early-stage Alzheimer's, on amphetamines. Yes, it knows a lot and can regurgitate things and create plausible code (if I have it run builds and fix errors every time it changes a file - which of course eats tokens), but having absolutely no understanding of how time or space works leads to 90% of its great ideas being nonsensical for UI tasks. Everything needs very careful guidance and supervision, otherwise it decides to do something different instead. For back-end stuff, maybe it's better. I'm on the fence regarding overall utility, but $20/month could almost be worth it, some months, for a tool that can add a ton of debug logging in seconds. |
| |
| ▲ | vincent_builds 3 hours ago | parent | prev | next [-] | | Hi Ale, author here. Skepticism is understandable, but trust me, I'm not just writing React boilerplate or refactoring. I find it difficult to include examples because a lot of my work is boring backend work on existing closed-source applications. It's hard to share, but I'll give it a go with a few examples :) ---- First example: Our quota detection system (shipped last month) handles configurable threshold detection across billing metrics. The business logic is non-trivial: distinguishing counter vs gauge metrics, handling multiple consumers, and writing efficient SQL queries across time windows. Claude's evolution:
- First pass: Completely wrong approach (DB triggers)
- Second pass: Right direction, wrong abstraction
- Third pass: Working implementation, we could iterate on ----
Second example: Sentry monitoring wrapper for cron jobs, a reusable component to help us observe our cronjob usage Claude's evolution:
- First pass: Hard-coded the integration into each cron job, a maintainability nightmare.
- Second pass: Using a wrapper, but the config is all wrong
- Third pass: Again, OK implementation, we can iterate on it ---- The "80%" isn't about line count; it's about Claude handling the exploration space while I focus on architectural decisions. I still own every line that ships, but I'm reviewing and directing rather than typing. This isn't writing boilerplate, it's core billing infrastructure. The difference is that Claude is treated like a very fast junior who needs clear boundaries, rather than being expected to make senior-level architecture decisions. | |
| ▲ | bsder 14 hours ago | parent | prev | next [-] | | We have all these superpowered AI vibe coders, and yet open source projects still have vast backlogs of open issues. Things that make you go "Hmmmmmm." | | |
| ▲ | baq 11 hours ago | parent | next [-] | | You have to pay a recurring subscription to access the worthwhile tools in a meaningful capacity. This goes directly against why retail users of open source software, some of whom are also developers of it, actually use it - and you can tell a lot of developers do it because they find coding fun. It’s a very different discussion when you’re building a product to sell. | |
| ▲ | TiredOfLife 4 hours ago | parent | prev [-] | | The projects that have those backlogs don't allow AI-made code |
| |
| ▲ | willtemperley 10 hours ago | parent | prev | next [-] | | Yes exactly. Show us the code and we can evaluate the advice. Otherwise it’s just an advertisement. | |
| ▲ | bakugo 18 hours ago | parent | prev | next [-] | | Actually providing examples of real tasks given to the AI and the subsequent results would break the illusion and give people opportunities to question the hype. Can't have that. We'll just keep getting submission after submission talking about how amazing Claude Code is with zero real world examples. | | |
| ▲ | vincent_builds 3 hours ago | parent | next [-] | | Author here. It's fair enough. I didn't give real-world examples; that's partially down to what I typically work on. I usually work in brownfield backend logic in closed-source applications that don't showcase well. Two recent production features: 1. *Quota crossing detection system*
- Complex business logic for billing infrastructure
- Detects when usage crosses configurable thresholds across multiple metric types
- Time: 4 days parallel work vs ~10 days focused without AI. The 3-attempt pattern was clear here:
- Attempt 1: DB trigger approach - wouldn't scale for our requirements
- Attempt 2: SQL detection but wrong interfaces, misunderstood counter vs gauge metrics
- Attempt 3: Correct abstraction after explaining how values are stored and consumed
2. *Sentry monitoring wrapper for cron jobs*
- Reusable component wrapping all cron jobs with monitoring
- Time: 1 day parallel vs 2 days focused. Nothing glamorous, but they are real-world examples of changes I've deployed to production quicker because of Claude. | |
| ▲ | johnfn 17 hours ago | parent | prev [-] | | Really, zero real world examples? What about this? https://news.ycombinator.com/item?id=44159166 |
| |
| ▲ | dingnuts 16 hours ago | parent | prev [-] | | the kind of engineer who has been Salesified to the point that they write such drivel as "these learnings" instead of "lessons" in an article that allegedly has a technical audience. it's funny because as I have gotten better as a dev I've gone backwards through his progression. when I was less experienced I relied on Google; now, just read the docs | | |
| ▲ | juped 12 hours ago | parent [-] | | Yeah, the trusty manual becomes #1 at around the same time as one starts actually engineering. You've entered the target audience! | | |
| ▲ | skydhash 6 hours ago | parent [-] | | These days, I often just go straight to the source (when available) to clear up some confusion about the library/software behavior. It can be quite a nice 10-minute break. |
|
|
|
|
| ▲ | asdev 18 hours ago | parent | prev | next [-] |
Guy said a whole lot of nothing. Said he's improved productivity, but also said AI falls short in all the common ways people have noticed. Also, I guarantee no one is building core functionality by delegating to Claude Code. |
| |
| ▲ | aronowb14 17 hours ago | parent | next [-] | | Agreed. I think this Anthropic article is a realistic take on what’s possible (focus on prototyping) https://www-cdn.anthropic.com/58284b19e702b49db9302d5b6f135a... | |
| ▲ | muzani 11 hours ago | parent | prev [-] | | This whole article is a really odd take. Maybe it's upvoted so much because it's from a "staff engineer". Most people are getting much better rates than 95% failure and almost nobody is spending over $1000 a month. If it was anyone else saying the same thing, they'd be laughed out of the room. |
|
|
| ▲ | jpollock 17 hours ago | parent | prev | next [-] |
| Avoiding the boilerplate is part of the job as a software developer. Abstracting the boilerplate is how you make things easier for future you. Giving it to an AI to generate just makes the boilerplate more of a problem when there's a change that needs to be made to _all_ the instances of it. Even worse if the boilerplate isn't consistent between copies in the codebase. |
| |
| ▲ | conradfr 13 hours ago | parent | next [-] | | What's weird for me is that most frameworks and tools usually include generators for boilerplate code anyway, so I'm not sure why wasting tokens/money on that is valuable. |
| ▲ | globular-toast 11 hours ago | parent | prev [-] | | Yeah. I'm increasingly starting to think this LLM stuff is simply the first time many programmers have been able to not write boilerplate. They didn't learn to build abstractions so essentially live on whatever platform someone else has built for them. AI is simply that new platform. I'm lazy af. I have not been manually typing up boilerplate for the past 15 years. I use computers to do repetitive tasks. LLMs are good at some of them, but it's just another tool in the box for me. For some it seems like their first and only one. What I can't understand is how people are ok with all that typing that you still have to do just going into /dev/null while only some translation of what you wrote ends up in the codebase. That one makes me even less likely to want to type. At least if I'm writing source code I know it's going into the repository directly. | | |
| ▲ | skydhash 6 hours ago | parent [-] | | The one thing I’m always suspicious about is the actual mastery (programming language and computer usage) involved. You never see anyone describe the context of what they were doing pre-LLM. |
|
|
|
| ▲ | alessandru 12 minutes ago | parent | prev | next [-] |
| did this guy read that other paper about ai usage making people stupid? how long until he falls from staff engineer back down to senior or something less? |
|
| ▲ | resonious 19 hours ago | parent | prev | next [-] |
| Interesting that this guy uses AI for the initial implementation. I do the opposite. I always build the foundation. That way I know how things work fundamentally. Then I ask agents to do boilerplate tasks. They're really good at following suit, but very bad at architecture. |
| |
| ▲ | f311a 18 hours ago | parent [-] | | Yeah, LLMs are pretty bad at planning maintainable architecture. They don’t refactor it when code is evolving and probably can’t do it due to context limitations. |
|
|
| ▲ | albingroen 19 hours ago | parent | prev | next [-] |
So we’re supposed to start paying $1k-$1.5k a month on top of already crazy salaries just to maybe get a productivity boost on trivial to semi-trivial issues? I know my boss, at least, would not be keen on that. |
| |
| ▲ | Jcampuzano2 14 hours ago | parent | next [-] | | If dev salaries are so crazy, it's quite the opposite: NOT investing $1-1.5k/mo to improve their productivity by a measurable amount would quite literally be just plain stupid, and I would question your boss's ability to think critically. Not to mention - while I know many don't like it - they may be able to achieve enough of a productivity boost to not require hiring as many of those crazy-salaried devs. It's literally a no-brainer. Thinking about it from just the individual cost factor is too simplified a view. | |
| ▲ | 15155 17 hours ago | parent | prev | next [-] | | Hardware companies routinely license individual EDA tool seats that cost more than numerous developer salaries - $1k/year is nothing if it improves productivity by any measurable amount. | | |
| ▲ | saulpw 15 hours ago | parent [-] | | The OP was saying it's $1k/mo. That's a 5-10% raise, which is a bit more than nothing. | | |
| ▲ | Jcampuzano2 14 hours ago | parent | next [-] | | There are many companies that regularly spend much more than that on other software related licenses that devs need to do their job productively. If the average US salaried developer is 10-15% more productive for just 1k more a month it is literally a no-brainer for companies to invest in that. Of course on the other side of the coin there are many companies that are very stingy with paying for literally anything for their employees that could measurably improve productivity, and hamper their ability to be productive by intentionally paying for cheap shitty tools. They will just lose out. | |
| ▲ | baq 11 hours ago | parent | prev | next [-] | | It isn’t a raise. Salaries are on a very different budget. Money is fungible etc but don’t tell accounting. | |
| ▲ | ryukoposting 7 hours ago | parent | prev [-] | | Parent comment isn't joking. Good simulators for RF stuff can be well over $5k per month. |
|
| |
| ▲ | AnotherGoodName 19 hours ago | parent | prev | next [-] | | I can't even use up $20 of credit (GPT-5 Thinking via IntelliJ's pro AI subscription) a month right now, with plenty of usage, so I'm surprised at the $1k figure. Is Claude that much more expensive? (A quick Google suggests yes, actually.) Having said that, some level of AI spending is the new reality. Your workplace pays for internet, right? Probably a really expensive, fast, corporate-grade connection? Well, they now also need to pay for an AI subscription. That's just the current reality. | |
| ▲ | everforward 18 hours ago | parent | next [-] | | I don't know what Intellij's AI integration is like, but my brief Claude Code experience is that it really chews through tokens. I think it's a combination of putting a lot of background info into the context, along with a lot of "planning" sort of queries that are fairly invisible to the end user but help with building that background for the ultimate query. Aider felt similar when I tried it in architect mode; my prompt would be very short and then I'd chew through thousands of tokens while it planned and thought and found relevant code snippets and etc. | |
| ▲ | billllll 15 hours ago | parent | prev | next [-] | | Paying for Internet is not a great analogy imo. If you don't pay $1k/mo for Internet, you literally can't work. What happens if you don't pay $1k/mo for Claude? Do you get an appreciable drop in productivity and output? Genuinely asking. | |
| ▲ | EE84M3i 13 hours ago | parent | prev | next [-] | | Anthropic and OpenAI both have a high SSO/enterprise tier tax. | |
| ▲ | oblio 19 hours ago | parent | prev [-] | | The fast corporate internet connection is probably $1,000 for 100 developers or more... | |
| |
| ▲ | albingroen 19 hours ago | parent | prev | next [-] | | And remember: these are subsidised prices. | |
| ▲ | dajonker 12 hours ago | parent [-] | | Exactly, makes it feel almost like an advertorial for Anthropic, who likely need most customers to pay 1000 bucks a month to break even. |
| |
| ▲ | sdesol 18 hours ago | parent | prev | next [-] | | It will certainly be interesting to see how businesses evolve in the upcoming years. What is written in stone is that you (the employee) will be measured, and I am curious to see what developers will be measured by in the future. Will you be at a greater risk of layoffs/lack of promotions/etc. if you spend more on AI? How do you as a developer prove that it is you and not the LLM that should be praised? | |
| ▲ | astrange 15 hours ago | parent | prev [-] | | The high salaries make productivity improvements even more important. | | |
| ▲ | beefnugs 12 hours ago | parent [-] | | If the world weren't a garbage hole of misalignment and bad planning: the people seeing positivity out of this stuff would be demanding raises immediately, and both AI experts and seniors should be demanding the company pay for and train juniors as part of its loyalty commitment to them. | |
|
|
|
| ▲ | jbs789 6 hours ago | parent | prev | next [-] |
I often find that Claude introduces a level of complexity that is not necessary in my case. I suspect this is a function of the training data (large repos or novel solutions). That said, I do sometimes find inspiration for new techniques in its answers. I just haven't heard others describe the same over-engineering problem, and I wonder if this is a general observation or only shows up because my requests are quite simple. (I have found that prompting it for the simplest or most efficient solution seems to help - sometimes taking 20+ lines down to 2-3, often more understandable.) P.S. I tend to work with data and a web app for processes related to a small business, while not a formally trained developer. |
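As a hypothetical illustration of that prompting tip (both versions invented, in TypeScript), the collapse often looks something like this:

    // The over-engineered first draft: a configurable strategy class for a one-off sum.
    class TotalCalculator {
      private strategy: (acc: number, n: number) => number;
      constructor(strategy?: (acc: number, n: number) => number) {
        this.strategy = strategy ?? ((acc, n) => acc + n);
      }
      compute(values: number[]): number {
        return values.reduce(this.strategy, 0);
      }
    }
    const total = new TotalCalculator().compute([1, 2, 3]); // 6

    // After asking for the simplest solution: one line, same result.
    const totalSimple = [1, 2, 3].reduce((acc, n) => acc + n, 0); // 6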
| |
| ▲ | chamomeal 5 hours ago | parent [-] | | Seems like LLMs really suffer from the "eh I'll just write it myself" mindset. Yesterday, on a React app using react-query (a library to manage caching and re-fetching of data), Claude Code wanted to update the cache manually, instead of just using a bit of state that was already in scope in the exact same component! For me, stuff like that is the same weird uncanny valley that you used to see in AI text, and see now in AI video. It just does such inhuman things. A senior developer would NEVER think to manually mutate the cache, because it's such a desperate hack. A junior dev wouldn't even realize it's an option. |
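To make that concrete for readers who don't use react-query, a minimal sketch of the two approaches (assuming @tanstack/react-query; the hook, endpoint, and field names are invented):

    import { useState } from "react";
    import { useQuery, useQueryClient } from "@tanstack/react-query";

    type Todo = { id: number; title: string; done: boolean };
    const fetchTodos = async (): Promise<Todo[]> =>
      (await fetch("/api/todos")).json(); // illustrative endpoint

    function useVisibleTodos() {
      const { data: todos = [] } = useQuery({ queryKey: ["todos"], queryFn: fetchTodos });
      const [showDone, setShowDone] = useState(true); // state already in scope

      // What a human would write: derive the view from the local state.
      const visible = showDone ? todos : todos.filter((t) => !t.done);

      // The "desperate hack": overwrite the cached server data to fake the view.
      const queryClient = useQueryClient();
      const hackHideDone = () =>
        queryClient.setQueryData<Todo[]>(["todos"], (old) =>
          (old ?? []).filter((t) => !t.done)
        );

      return { visible, setShowDone, hackHideDone };
    }

The second version "works" until the next refetch replaces the cache, which is exactly why a senior wouldn't reach for it.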
|
|
| ▲ | willtemperley 11 hours ago | parent | prev | next [-] |
Maybe I’m contrarian, but I design and write most of my code and let LLMs do the reviews. Why? First, I know my problem space better than the LLM. Second, the best way to express coding intention is with code. The models often have excellent suggestions for improvements I wouldn’t have thought of. I suspect the probability of getting a good answer is increased significantly by narrowing the scope. Another technique is to say “do this like <some good project> does it”, but I suspect that might be close to copyright theft. |
|
| ▲ | meerab 19 hours ago | parent | prev | next [-] |
I have barely written any code since my switch to Claude Code! It's the best thing since sliced bread! Here's what works for me: - A detailed claude.md containing overall information about the project. - Anytime Claude chooses a route that's not my preferred one, I ask for my preference to be saved in global memory. - Detailed planning documentation for each feature, describing high-level functionality. - As I develop the feature, I add documentation with the database schema, sample records, sample JSON responses, API endpoints used, and test scripts. - MCP, MCP, MCP! Playwright is a game changer. The more context you give upfront, the less back-and-forth you need. It's been absolutely transformative for my productivity. Thank you Claude Code team! |
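For anyone who hasn't tried this, a minimal sketch of what such a claude.md might look like (project details invented for illustration):

    # CLAUDE.md
    ## Project
    Next.js frontend, FastAPI backend, Postgres. Monorepo: /web and /api.
    ## Commands
    - `npm run dev` and `npm test` from /web
    - `uvicorn app.main:app --reload` from /api
    ## Conventions
    - All DB access goes through /api/db/queries.py
    - Prefer small, typed API handlers; no business logic in components
    ## Saved preferences (added when Claude deviated)
    - Call ffmpeg via subprocess with explicit args; do not add wrapper libraries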
| |
| ▲ | bobbylarrybobby an hour ago | parent | next [-] | | What does the playwright MCP accomplish for you? Is it basically a way for Claude to play with your app in the browser without having to write playwright tests? | |
| ▲ | f311a 18 hours ago | parent | prev | next [-] | | What are you working on? In my industry it fails half of the time and comes up with absolute nonsense. The data just doesn't exist for our problems; it can only work when you guide it and ask for a few functions at most. | |
| ▲ | ryukoposting 7 hours ago | parent | next [-] | | This sounds like my experiences with it. I'm writing embedded firmware in C and Rust. I'd describe further, but Claude seems incompetent at all aspects of this space. | |
| ▲ | meerab 18 hours ago | parent | prev [-] | | I am working on VideoToBe.com - and my stack is NextJS, PostgreSQL and FastAPI. Claude Code is amazing at producing code for this stack. It does an excellent job at outputting ffmpeg commands, curl commands, Linux shell scripts, etc. I have written detailed project and feature plans in Markdown - and Claude has no trouble understanding the instructions. I am curious - what is your use case? | |
| ▲ | mattmanser 6 hours ago | parent [-] | | That seems to be a great example of precisely the sort of program an AI would be good at. A small focused product that only does one thing. Mainly gluing together other people's code. It's a polished greenfield project that does one tiny bit of focused functionality. Interestingly, this guy has been making pretty much the same app as you, and live-streamed making it on youtube: https://www.youtube.com/@RayFernando1337 Looks like he's now pivoted to selling access to his discord server for vibe coding tips as I can't find a link to his product. But if we're honest here, it's not going to take a ton of code to make that. All the functionality to do it is well documented. Many people here could make a competitor in a week, without agentic AI, just using AI as a super-charged SO. The limiter pre-AI (aside from AI transcribing it) would have been reading and implementing/debugging all the documentation of the libraries you're using, which AI is great at circumventing. Your product looks really good, and is an excellent example of what vibe coded AI is great at. I hope you're getting good traction. |
|
| |
| ▲ | ethanwillis 18 hours ago | parent | prev | next [-] | | Personally, I give Claude a fully specified program as my prompt so that it gives me back a working program 100% of the time. Really simple workflow! | | |
| ▲ | Zee2 14 hours ago | parent [-] | | Ah, I’ve tried that one, but I must be doing something wrong. I give it a fully specified working program, and often times it gives me back one that only works 50% of the time! |
| |
| ▲ | jazzyjackson 18 hours ago | parent | prev | next [-] | | Does Claude Code provide some kind of "global memory" the LLM refers to, or is this just a request you make within the LLM's context window? Just curious, I hadn't heard the term used before. EDIT: I see, you're asking Claude to modify claude.md to track your preference there, right? https://docs.anthropic.com/en/docs/claude-code/memory | |
| ▲ | meerab 18 hours ago | parent [-] | | Yes. /init will initialize the project and save the initial project information and preferences. Ask Claude to update the preferences and documentation the moment you realize that Claude has deviated from the path. |
| |
| ▲ | mierz00 7 hours ago | parent | prev [-] | | How have you been using Playwright MCP? |
|
|
| ▲ | tkgally 18 hours ago | parent | prev | next [-] |
| Anthropic just posted an interview with Boris Cherny, the creator of Claude Code. He also offers some ideas on how to use it. “The future of agentic coding with Claude Code” https://youtu.be/iF9iV4xponk |
|
| ▲ | nikcub 18 hours ago | parent | prev | next [-] |
> budget for $1000-1500/month for a senior engineer going all-in on AI development. Is this another case of someone using API keys and not knowing about the Claude Max plans? It's $100 or $200 a month; if you're not doing pure YOLO brute-force vibe coding, the $100 plan works. https://www.anthropic.com/max |
| |
| ▲ | vincent_builds 3 hours ago | parent | next [-] | | Author here, quick clarification on pricing: the $1000-1500/month is for Teams/Enterprise with higher rate limits, not the consumer MAX plans. Consumer MAX ($200/month) works for lighter usage but hits limits quickly with parallel agents and large codebases. For context: that's 1-2% of a senior engineer's fully loaded cost. The ROI is clear if it delivers even 10% productivity gain (we're seeing 2-3x on specific tasks). You're right that many devs can start with MAX plans. The higher tier becomes necessary when running multiple parallel contexts and doing systematic exploration (the "3-attempt pattern" burns tokens fast). I wouldn't be doing it if I didn't think it was value for money. I've always been a cost-conscious engineer who weighs cost/value, and with Claude, I am seeing the return. | |
| ▲ | reissbaker 16 hours ago | parent | prev | next [-] | | Yeah $1k-1.5k seems absurdly high. The $200/month 20x variant of the Max plan covers an insane amount of usage, and the rate limits reset every five hours. Hard to imagine needing it so badly that you're blowing through that rate limit multiple times a day, every day... And if you are, I think switching to per-token payment would probably cost a lot more than $1k. | |
| ▲ | rolls-reus 11 hours ago | parent | prev [-] | | The Max plan is a consumer plan; it’s not available with Teams or Enterprise. They introduced a premium team plan ($150) with Claude Code access, but I'm not sure how much usage that bundles. |
|
|
| ▲ | pastage 5 hours ago | parent | prev | next [-] |
That is 150 MWh per month of AI for a staff engineer, if we do a straight dollar-to-kWh conversion, plus or minus an order of magnitude. |
|
| ▲ | nzach 4 hours ago | parent | prev | next [-] |
| One thing that I haven't seen a lot of people talk about is the relatively new model config "Opus Plan Mode: Use Opus 4.1 in plan mode, Sonnet 4 otherwise". In my opinion this should be the default config. Increasing the quality of the plans gives you a much better experience using Claude Code. |
|
| ▲ | RomanPushkin 18 hours ago | parent | prev | next [-] |
There is one thing I would highly recommend to anyone using Claude or any other agent: logging. I can't emphasize it enough: if you have logging, you can take the whole log file, dump it into the AI, outline the problem, and you'll likely get a solution or advance to the next step. Logging is everything. |
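A minimal sketch of what that can look like (plain Node/TypeScript; the file name and fields are arbitrary): one JSON object per line, so the whole file pastes cleanly into a model alongside the problem description.

    import { appendFileSync } from "node:fs";

    function log(level: "info" | "warn" | "error", msg: string, ctx: Record<string, unknown> = {}) {
      // One structured line per event; easy for both humans and LLMs to scan.
      const line = JSON.stringify({ ts: new Date().toISOString(), level, msg, ...ctx });
      appendFileSync("debug.log", line + "\n");
    }

    log("info", "redeal started", { rows: 4, cols: 13 });
    log("error", "card placed in wrong slot", { expected: 0, got: 13 });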
|
| ▲ | kbuchanan 20 hours ago | parent | prev | next [-] |
For me, working mostly in Planning Mode skips many of the initial misfires and often leads to a correct outcome on the first edit. |
|
| ▲ | namesbc 18 hours ago | parent | prev | next [-] |
Spending $1500 per month is a crazy wasteful amount of money |
| |
| ▲ | the_hoffa 11 hours ago | parent [-] | | That's $18k a year, about equal to or cheaper than "outsourcing", minus the tax and legal ramifications. I agree it's wasteful, but only from a long-term view of what spending looks like (or at least what it should/used to look like). Those who see $1.5k/month as "saving" money typically only care about next quarter. As the old adage goes: a thousand dollars saved this month is a hundred thousand spent next year. |
|
|
| ▲ | makk 13 hours ago | parent | prev | next [-] |
I don’t understand the use of MCP described in the post. Claude Code can access pretty much all those third-party services in the shell, using curl or gh and so on. And in at least one case, using MCP can cause trouble: the Linear MCP server truncates long issues, in my experience, whereas curling the API does not. What am I missing? |
|
| ▲ | BobbyTables2 18 hours ago | parent | prev | next [-] |
| The author will be in upper management before they know it! |
|
| ▲ | 4 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | axus 19 hours ago | parent | prev | next [-] |
| I like his point about more objectivity and zero ego. You don't have to worry about hurting an AI's feelings or your own when you throw away code. |
| |
| ▲ | awesome_dude 19 hours ago | parent [-] | | But I still find myself needing (strongly) to let Claude know when it's made a breakthrough that would have been hard work on my own. | | |
| ▲ | CharlesW 18 hours ago | parent | next [-] | | Good creators tend to treat their tools with respect, and I can't imagine any reason we shouldn't feel gratitude toward our tools after a particularly satisfying session. There may also be selfish reasons to do this: (1) "Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance" https://arxiv.org/abs/2402.14531 (2) "Three Things to Know About Prompting LLMs" https://sloanreview.mit.edu/article/three-things-to-know-abo... | |
| ▲ | groby_b 19 hours ago | parent | prev [-] | | Curious: Do you also laud your compiler for particularly good optimizations? | | |
| ▲ | awesome_dude 19 hours ago | parent [-] | | There's a couple of things there: 1. I don't see the output of the compiler; all I get is an executable blob. It could be inspected, but I don't think I ever have in my 20+ year career. Maybe that's a lie and I've rocked up with a hex editor once or twice, out of pure curiosity, but I've never got past looking for strings that I recognise. 2. When I use Claude, I am using it to do things that I can do, by hand, myself. I am reviewing the code as I go along, and I know what I want it to do because it's what I would be writing myself if I didn't have Claude (or Gemini, for that matter). So, no, I have never congratulated the compiler (or interpreter, linker, assembler, or even the CPU). Finally, I view the AI as a pairing partner; sometimes it's better than me, sometimes it's not, and I have to be "in the game" to make sure I don't end up with a vibe-coded mess. edit: This is from yesterday (Claude had just fixed a bug for me - all I did was paste the block of code that the bug was in, and say "expecting x behaviour but getting y behaviour instead"): perfect, thanks Edit
You're welcome! That was a tricky bug - using rowCount instead of colCount in the index calculation is the kind of subtle error that can be really hard to spot. It's especially sneaky because row 0 worked correctly by accident, making it seem like the logic was mostly right.
Glad we got it sorted out! Your Gaps redeal should now work properly with all the 2s (and other correctly placed cards) staying in their proper positions across all rows. | | |
| ▲ | LtWorf 19 hours ago | parent [-] | | You've got to check the assembly, not the binary, for optimisations… | | |
| ▲ | awesome_dude 18 hours ago | parent [-] | | Yeah - or I could just not care unless I have to (which, in the last 20 plus years, has been... let me think... oh, right... never) |
|
|
|
|
|
|
| ▲ | pseudosavant 15 hours ago | parent | prev | next [-] |
This has been my experience too. I’m just not quite as far along as the author. Detachment from the code has been excellent for me. I just started a v2 rewrite of something I’d never have done in the past, mostly because it would have taken me too much time to try it out if I wrote it all by hand. |
|
| ▲ | syspec 17 hours ago | parent | prev | next [-] |
Does this work for others in other domains? When creating a Swift application, I can't imagine creating 20 agents and letting them go to town. Same for the backend of such an application if it's in, say, Java + Spring Boot |
|
| ▲ | jedberg 19 hours ago | parent | prev | next [-] |
| I'd like to share my journey with Claude (not code). I fed Claude a copy of everything I've ever written on Hacker News. Then I asked it to generate an essay that sounds like me. Out of five paragraphs I had to change one sentence. Everything else sounded exactly as I would have written it. It was scary good. |
| |
| ▲ | into_ruin 18 hours ago | parent | next [-] | | I'm doing a project in a codebase I'm not familiar with, in a language I don't really know, and Claude Code has been amazing at _explaining_ things to me. "Who calls this function," "how is this generated," etc. I'm not comfortable using it to generate code for this project, but I can absolutely see using it to generate code for a project I'm familiar with in a language I know well. | |
| ▲ | keeda 14 hours ago | parent | prev [-] | | Reid Hoffman, LinkedIn co-founder, has gone whole hog on that idea and has a literal AI clone of himself, trained on all his writings, videos and audio interviews -- complete with AI-generated deep-fake visuals and cloned voice: https://www.linkedin.com/posts/reidhoffman_can-talking-with-... I've watched a handful of videos with this "digital twin", and I don't know how much post-processing has gone into them, but it is scary accurate. And this was a year+ ago. |
|
|
| ▲ | nh43215rgb 18 hours ago | parent | prev | next [-] |
$1000-1500/month for AI, paid by the employer... that's quite nice.
I wonder how much it would cost to run a couple of Claude Code instances 24/7 indefinitely. If a company's got the resources, they might as well try that against their issues. |
|
| ▲ | block_dagger 19 hours ago | parent | prev | next [-] |
| The author doesn't make it clear why they switched from Cursor to Claude. Curious about what they can do with Claude that can't be done with Cursor. I use both a lot and find Cursor to be superior for the very large codebases I work in. |
| |
| ▲ | reissbaker 16 hours ago | parent | next [-] | | Pretty much everyone I talk to prefers the opposite, and feels like Claude performs best inside the Claude Code harness and not the Cursor one. But I suppose different strokes for different folks... Personally I'm a Neovim addict, so you can pry TUIs out of my cold dead hands (although I recognize that's not a preference everyone shares). I'm also not purely vibecoding; I just use it to speed up annoying tasks, especially UI work. | |
| ▲ | meerab 17 hours ago | parent | prev | next [-] | | Personal opinion: Claude Code is more user-friendly than Cursor, with its CLI-like interface. The file modifications are easy to view, and it automatically runs psql, cd, ls, and grep commands. The output of the commands is shown in a more user-friendly fashion. Agents and MCPs are easy to organize and use. | |
| ▲ | block_dagger 16 hours ago | parent [-] | | I feel just the opposite. I think Cursor's output is actually in the realm of "beautiful." It's well formatted and shows the user snippets of code and reasoning that helps the user learn. Claude is stuck in a terminal window, so reduced to monospaced bullet lines. Its verbose mode spits out lines of file listings and other context irrelevant to the user. |
| |
| ▲ | RomanPushkin 18 hours ago | parent | prev [-] | | It's easy: Cursor is a reseller; they optimize your token usage so they can make a profit. Claude is the final endpoint, and they offer tokens at the cheapest price possible. | |
| ▲ | block_dagger 16 hours ago | parent [-] | | I use Cursor in MAX mode because my employer pays for the tokens. I probably should have mentioned that in my OP. It makes a huge difference. |
|
|
|
| ▲ | xentronium an hour ago | parent | prev | next [-] |
| > The shift to Claude Code? That took just hours of use for me to become productive. > This isn't failure; it's the process! > The biggest challenge? AI can't retain learning between sessions ai slop |
|
| ▲ | dakiol 19 hours ago | parent | prev | next [-] |
To all the engineers using Claude Code: how do you submit your (well, Claude's) code for review? Say you have a big feature/epic to implement. In typical (pre-AI) times you would split it into chunks and submit each chunk as a PR to be reviewed. You don't want to submit dozens of file changes because nobody would review them. Now with LLMs, one can easily explain the whole feature to the machine and it will output the whole code just fine. So what do you do? Divide it manually for review submission, one chunk after another? It's way easier to let the agent code the whole thing if your prompt is good enough than to give instructions bit by bit only because your colleagues cannot review a PR with 50 file changes. |
| |
| ▲ | athrowaway3z 19 hours ago | parent | next [-] | | Practically - you can commit it all after you're done and then tell it to tease apart the commit into multiple well documented logical steps. "Ask the LLM" is a good enough solution to an absurd number of situations. Being open to questioning your approach - or even asking the LLM (with the right context) to question your approach has been valuable in my experience. But from a more general POV, its something we'll have to spend the next decade figuring out. 'Agile'/scrum & friends is a sort of industry-wide standard approach, and all of that should be rethought - once a bit of the dust settles. We're so early in the change that I haven't even seen anybody get it wrong, let alone right. | |
| ▲ | yodsanklai 19 hours ago | parent | prev | next [-] | | I split my diffs like I always have, so they can be reviewed by a human (or even an AI, which won't understand 50 file changes). A 50-file change is most likely unsafe to deploy and unmaintainable. | |
| ▲ | Yoric 19 hours ago | parent | prev | next [-] | | I regularly write big MRs, then cut them into 5+ (sometimes 10+) smaller MRs. What does Claude Code change here? | | |
| ▲ | dakiol 19 hours ago | parent [-] | | The split seems artificial now. Before, an average engineer would produce code sequentially, chunk after chunk. Each chunk submitted only after the previous one was reviewed and approved.
Today, one could submit the whole thing for review. Also, if machines can write it, why not let machines review it too? Seems weird not to do so. | | |
| ▲ | Yoric 43 minutes ago | parent | next [-] | | Not sure I follow. The limitation has never been about the developer being able to write a complex feature in one MR. It has always been about the other developer not being able to review a complex MR. So far, nothing I've seen convinces me that machines can (yet) write or review code autonomously (although they can certainly be useful as assistants). Maybe some day. | |
| ▲ | Disposal8433 13 hours ago | parent | prev [-] | | Will the LLM take responsibility for the bugs and bad code introduced by the review? If it does and I'm free, then go for it. |
|
| |
| ▲ | edverma2 18 hours ago | parent | prev | next [-] | | I built a tool to split up a single PR into multiple nice commits: https://github.com/edverma/git-smart-squash | |
| ▲ | bongodongobob 19 hours ago | parent | prev [-] | | Do whatever you want. Tell it to make different patches in chunks if you want. It'll do what you tell it to do. |
|
|
| ▲ | josefrichter 18 hours ago | parent | prev | next [-] |
I'm almost sure that we all ended up at the same set of rules and steps for how to get the best out of Claude - mine are almost identical, and so are those of others I know :-) |
|
| ▲ | rester324 19 hours ago | parent | prev | next [-] |
| > If I were to give advice from an engineer's perspective, if you're a technical leader considering AI adoption:
>> Let your engineers adopt and test different AI solutions: AI-assisted coding is a skill that you have to practice to learn. I am sorry, but this is so out of touch with reality. Maybe in the US most companies are willing to allocate you 1000 or 1500 USD/month/engineer, but I am sure that in many countries outside of the US not even a single line (or other type of) manager will allocate you such a budget. I know for a fact that in countries like Japan you even need to present your arguments for a pizza party :D So that's all you need to know about AI adoption and what's driving it |
| |
| ▲ | LtWorf 15 hours ago | parent | next [-] | | I love how you are getting downvoted, probably by people who have never set foot outside the USA. | |
| ▲ | bongodongobob 18 hours ago | parent | prev [-] | | Depends on the culture. I worked at a place that did $100 million in sales a year and if the cost was less than $5k for something we needed, management said just fuckin do it, don't even ask. I also worked at a place that did $2 billion a year and they required multi-level approval for MS project pro licenses. All depends. Edit: Why is this downvoted? Different corp cultures have different ideas about what is worthwhile. Some places value innovation and experimentation and some places don't. |
|
|
| ▲ | lordnacho 19 hours ago | parent | prev | next [-] |
| I'm using Claude all the time now. It works, and I'm amazed it worked so easily for me. Here's what it looks like: 1) Summarize what I think my project currently does 2) Summarize what I think it should do 3) Give a couple of hints about how to do it 4) Watch it iterate a write-compile-test loop until it thinks it's ready I haven't added any files or instructions anywhere, I just do that loop above. I know of people who put their Claude in YOLO mode on multiple sessions, but for the moment I'm just sitting there watching it. Example: "So at the moment, we're connecting to a websocket and subscribing to data, and it works fine, all the parsing tests are working, all good. But I want to connect over multiple sockets and just take whichever one receives the message first, and discard subsequent copies. Maybe you need a module that remembers what sequence number it has seen?" Claude will then praise my insightful guidance and start making edits. At some point, it will do something silly, and I will say: "Why are you doing this with a bunch of Arc<RwLock> things? Let's share state by sharing messages!" Claude will then apologize profusely and give reasons why I'm so wise, and then build the module in an async way. I just keep an eye on what it tries, and it's completely changed how I code. For instance, I don't need to be fully concentrated anymore. I can be sitting in a meeting while I tell Claude what to do. Or I can be close to falling asleep, but still be productive. |
| |
| ▲ | abraxas 19 hours ago | parent [-] | | I tried to follow the same pattern on a backend project written in Python/FastAPI and this has been mostly a heartache. It gets kind of close but then it seems to periodically go off the rails, lose its mind and write utter shit. Like braindead code that has no chance of working. I don't know if this is a question of the language or what but I just have no good luck with its consistency. And I did invest time into defining various CLAUDE.md files. To no avail. | | |
| ▲ | ryandrake 17 hours ago | parent | next [-] | | What I find helpful in a large project is whenever Claude goes way off the rails, I correct it, and then tell it to update CLAUDE.md with instructions in its own words how to not do it again in the future. It doesn't stop the initial hallucinations and brainfarts, but it seems to be making the tool slowly better as it adds context for itself. | |
| ▲ | lordnacho 19 hours ago | parent | prev | next [-] | | Has this got anything to do with using a more strongly typed language? I've heard that reported, but I'm not sure whether it's true, since my Python scripts tend to be short. Does it end in a forever loop for you? I used to have this problem with other models. | |
| ▲ | adastra22 18 hours ago | parent [-] | | I also use Rust with Claude Code, like GP. I do not experience forever loops — Claude converges on a working compiling solution every time. Sometimes the solution is garbage, and many times it gets it to “work” by disabling the test. I have layers of scaffolding (critic agents) that prevent this from being something I have to deal with, most of the time. But yeah, strongly typed languages, test driven development, and good high quality compiler errors are real game changers for LLM performance. I use Rust for everything now. |
| |
| ▲ | wg0 19 hours ago | parent | prev [-] | | I can second that. Even on plain CRUD with little to no domain logic. |
|
|
|
| ▲ | cjonas 15 hours ago | parent | prev | next [-] |
One thing I've noticed is the difference in code quality by language. I'm constantly disappointed by the Python code it outputs. I have to correct it to follow even the most basic software development principles (DRY, etc.). TypeScript, on the other hand, seems to do much better on the first pass. Still not always beautiful code, but much more application-ready. My hypothesis is that this is due to the billions of LOC of Jupyter notebooks it was probably trained on :/ |
| |
| ▲ | rcfox 15 hours ago | parent | next [-] | | With TypeScript, I find it pretty eager to just try `(foo as any).bar` when it gets the initial typing wrong. It also likes to redefine types in every file they're used in, instead of importing them. It will fix those if you catch them, but I haven't been able to figure out a prompt that prevents this in the first place. | |
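A tiny invented example of that escape hatch versus the fix it should write:

    type User = { name: string };

    function greet(u: unknown) {
      // The LLM's escape hatch: compiles, but throws away type safety.
      console.log((u as any).name);

      // What it should do instead: narrow the unknown value before use.
      if (typeof u === "object" && u !== null && "name" in u) {
        console.log((u as User).name);
      }
    }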
| ▲ | __mharrison__ 13 hours ago | parent | prev [-] | | There's a LOT of bad/newbie Python code floating around. I find that if I'm specific, it does a good job. (I'm also passing in my code/notebooks as context, so one would assume that it is attempting to mirror my style.) |
|
|
| ▲ | furyofantares 20 hours ago | parent | prev | next [-] |
| I've come around on something like this. I start by putting a little effort into a prompt and into providing context, but not a ton - and see where Claude Code gets with it. It might even get what I asked for working in terms of features, but it's garbage code. This is a vibe session, not caring about the code at all, or hardly at all. I notice what worked and what didn't, what was good and what was garbage -- and also how my own opinion of what should be done changed. I have Claude Code help me update the initial prompt, help me update what should have been in the initial context, maybe add some of the bits that looked good to the initial context as well, and then write it all to a file. Then I revert everything else and start with a totally blank context, except that file. In this session I care about the code, I review it, I am vigilant to not let any slop through. I've been trying for the second session to be the one that's gonna work -- but I'm open to another round or two of this iteration. |
| |
| ▲ | soperj 20 hours ago | parent [-] | | and do you find this takes longer or shorter than just doing it yourself from scratch? | | |
| ▲ | shinecantbeseen 20 hours ago | parent | next [-] | | I’m with you. Sometimes it really feels like we’re just tacking the cognitive load of managing the drunk senior on top of the problem at hand, instead of just dealing with the problem at hand. | |
| ▲ | sfjailbird 20 hours ago | parent [-] | | Over the lifetime of a program, a hundred times more time is spent reading a given piece of code than was spent writing it. OK, I made up the statistic, but the core idea is true, and it's something that is rarely considered in this debate. At least with code you wrote, you can probably recognize it later when you need to maintain it or just figure out what it does. | |
| ▲ | adastra22 18 hours ago | parent [-] | | Most code is never read, to be honest. | | |
| ▲ | furyofantares 17 hours ago | parent [-] | | In the olden days I read the code I wrote probably 2-3 times while in the process of writing it, and then almost always once in full just before submitting it. |
|
|
| |
| ▲ | furyofantares 20 hours ago | parent | prev | next [-] | | Quite a bit shorter. Plus I can do a good chunk of the work (the first iteration) in contexts where I couldn't before, where I need less focus, and it uses less of my energy. I think I can also end up with a better result, having learned more myself. It's just better in a whole host of directions at once. I don't end up intimately familiar with the solution, however, which I think is still a major cost. |
| ▲ | bongodongobob 20 hours ago | parent | prev [-] | | Not OP, but I don't care if it's the same amount of time, because I can do it drunk/while doing other things. Not sure why "how long does it take" is the be-all and end-all for some people. |
|
|
|
| ▲ | drudolph914 15 hours ago | parent | prev | next [-] |
To throw my hat into the ring: I am in no way shy about using the AI tooling and I like using it, but I am happy we're finally seeing people talk about AI in a way that matches my personal reality with the tools. For the record, I've been bullish on the tooling from the beginning. My dev-tooling AI journey has been chatGPT -> vscode + copilot -> early cursor adopter -> early claude + cursor adopter -> cursor agent with claude -> and now claude code. I've also spent a lot of time trying out self-hosted LLMs, such as a couple of versions of Qwen Coder 2.5/3 32B, as well as DeepSeek 30B, talking to them through the vscode continue.dev extension. My personal feeling is that the AI coding/tooling industry hit a major plateau in usefulness as soon as agents became a part of the tooling. The reality is that coding is a highly precise task, and LLMs, down to the very core of the model architecture, are not precise in the way coding needs them to be. It's not that I think we'll never see coding agents, but I think it will take a deep and complete bottom-up kind of change, possibly an entirely new model architecture, to get us to what people imagine a coding agent is. I've settled on just using claude w/ cursor and being done with experimenting; the agent tooling just slows my engineering team down. I think the worst part about this dev-tooling space is that the comment sections on these kinds of articles are completely useless: it's either AI hype bots just spouting nonsense, or the most mid and obvious takes that you hear everywhere else. I've genuinely become frustrated with all this vague advice and how the AI dev community talks about this domain space. There is no science, data, or reasoning as to why these things fail or how to improve them. I think anyone who tries to take this domain space seriously knows that there's a limit to all this tooling, that we're probably not going to see anything ground-breaking for a while, and that there doesn't exist a person, outside the AI researchers at the big AI companies, who could tell you how to actually improve the performance of a coding agent. I think that famous vibe-code reddit post said it best: "what's the point of using these tools if I still need a software engineer to actually build it when I'm done prototyping" |
|
| ▲ | sigmonsays 18 hours ago | parent | prev [-] |
Every goddamn time, AI hallucinates a solution that is not real (in ChatGPT). I haven't put a huge effort into learning to write prompts, but in short, it seems easier to write the code myself than to figure out the prompts. If you don't know every detail ahead of time and ask a slightly off question, the entire result will be garbage. |