| ▲ | badsectoracula 6 hours ago |
| > OpenClaw has nearly half a million lines of code, 53 config files, and over 70 dependencies. This breaks the basic premise of open source security. Chromium has 35+ million lines, but you trust Google’s review processes. Most open source projects work the other way: they stay small enough that many eyes can actually review them. Nobody has reviewed OpenClaw’s 400,000 lines. This reminds me of a very common thing posted here (and elsewhere, e.g. Twitter) to promote how good LLMs are and how they're going to take over programming: the number of lines of code they produce. As if every competent programmer suddenly forgot the whole idea of LoC being a terrible metric for measuring productivity or -even worse- software quality. Or the idea that software is meant to be written to be readable (to water down "Programs are meant to be read by humans and only incidentally for computers to execute" a bit). Or even Bill Gates' famous "Measuring programming progress by lines of code is like measuring aircraft building progress by weight". Even if you believe that AI will -somehow- take over the whole task completely so that no human will need to read code anymore, there is still the issue that the AIs will need to be able to read that code, and AIs are much worse at doing that (especially with their limited context sizes) than at generating code. So it remains a problem to use LoC as such a measure even if all you care about is the driest "does X do the thing I want?" aspect, ignoring other quality concerns. |
|
| ▲ | gyomu 6 hours ago | parent | next [-] |
| Yeah, it’s pretty wild. Even pg is tweeting stuff like “An experienced programmer told me he's now using AI to generate a thousand lines of code an hour.“ https://x.com/paulg/status/2026739899936944495 Like if you had told pg to his face in (pre AI) office hours “I’m producing a thousand lines of code an hour”, I’m pretty sure he’d have laughed and pointed out how pointless that metric was? |
| |
| ▲ | ruszki 2 hours ago | parent | next [-] | | I don't understand how some people here decide who the good programmers are. A lot of people remind me of a guy from West Palm Beach who votes in elections solely based on who has more "fame". Paul Graham is famous for sure (at least in HN circles), but I never considered him an exceptional or even good programmer. So I always took his words with a hefty grain of salt. And sometimes comments here include a list of "good" coders where half of them are these famous, but not actually good, ones. | | |
| ▲ | TacticalCoder 2 hours ago | parent [-] | | > Paul Graham is famous for sure (at least in HN circles), but I never considered him an exceptional or good programmer at all. pg wrote a Lisp dialect, Arc, with Morris. The Morris of "the Morris worm". These people are at the very least hackers and they definitely know how to code. I don't think a "not good programmer" can write a Lisp dialect. At the very least, of all the "not good" programmers I've met in my life, 0% of them could have written a Lisp dialect. Arc not reaching the level of fame of Linux or Quake or Kubernetes doesn't mean pg is not a good programmer. | | |
| ▲ | steveklabnik 12 minutes ago | parent | next [-] | | > I don't think a "not good programmer" can write a Lisp dialect. You can write a lisp in 145 lines of Python: https://norvig.com/lispy.html | |
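To make the "a lisp can be tiny" point concrete: here is a toy evaluator in the spirit of Norvig's lispy, far smaller than 145 lines. This is a fresh sketch for illustration, not Norvig's actual code, and it handles only integers, symbols, and builtin procedures (no special forms like `define` or `lambda`).

```python
# Toy Lisp evaluator: tokenize -> parse -> eval.
# Illustrative sketch only; no special forms, no error recovery.
import operator as op

def tokenize(src):
    # Pad parens with spaces so split() yields one token per atom.
    return src.replace('(', ' ( ').replace(')', ' ) ').split()

def parse(tokens):
    token = tokens.pop(0)
    if token == '(':
        expr = []
        while tokens[0] != ')':
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ')'
        return expr
    try:
        return int(token)   # numeric literal
    except ValueError:
        return token        # otherwise a symbol

# A minimal global environment of builtin procedures.
ENV = {'+': op.add, '-': op.sub, '*': op.mul, '/': op.truediv,
       'max': max, 'min': min}

def evaluate(x, env=ENV):
    if isinstance(x, str):          # symbol: look it up
        return env[x]
    if not isinstance(x, list):     # number: self-evaluating
        return x
    proc = evaluate(x[0], env)      # list: apply head to evaluated args
    args = [evaluate(arg, env) for arg in x[1:]]
    return proc(*args)

print(evaluate(parse(tokenize('(* (+ 1 2) 4)'))))  # prints 12
```

Whether someone who can write this is thereby a "good programmer" is exactly the question the thread is arguing about.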
| ▲ | eichin 16 minutes ago | parent | prev | next [-] | | Presumably he got better in the intervening decades, but part of how we stopped the Morris Worm was that it was badly written (see the various versions of With Microscope and Tweezers for detail, particularly about the "am I already running" check that ended up being why it got noticed, "because exponential growth"). Even for "bored 1st year grad student skipping lectures" it should have been better code :-) (Also, writing a Scheme dialect was a first-semester CS problem set - if you were in a 1980s academic CS environment it took more effort to not accidentally write a lisp interpreter into something; something in the water supply...) | |
| ▲ | ruszki an hour ago | parent | prev | next [-] | | I met a coder who has several self-made programming languages, and I would never allow him anywhere near any codebase I'm responsible for. So writing a Lisp dialect is not something that makes you a good coder for sure. Even as a junior you can do that. Making it good, and being able to really reason about your choices, is a different story. I've never seen any good new reasoning from Graham like, for example, Dan Abramov does all the time. They are not even close, and definitely not in Graham's favor. | |
| ▲ | aerhardt an hour ago | parent | prev [-] | | I take him to be a good programmer on top of a pioneer venture capitalist and entrepreneur but Hackers and Painters contains some pretty bad predictions and takes on programming, and if he didn't have that good taste or foresight then, I can't imagine what it's like now. |
|
| |
| ▲ | medi8r 6 hours ago | parent | prev | next [-] | | He is a Lisper too, making it more ironic. Lisp has the power to heavily reduce cruft through heavy customization with macros. | | | |
| ▲ | manoDev 5 hours ago | parent | prev | next [-] | | They need to keep the musical chairs going. | |
| ▲ | amelius 5 hours ago | parent | prev | next [-] | | Technical debt is increasing by 1,000 lines an hour. | |
| ▲ | steve1977 4 hours ago | parent | prev | next [-] | | We all know that a thousand parentheses would be better metric. | |
| ▲ | wiseowise 6 hours ago | parent | prev | next [-] | | It’s all virtual virtue signaling. If you were to say this shit in the office, you’d be walked out pretty fast. | | |
| ▲ | Zak 3 hours ago | parent | next [-] | | Who is signaling what virtues to whom in this context? When I see PG write something like that, it signals to me that he has embraced AI hype to the point that he is displaying poor taste and embracing a risky technical practice. | | | |
| ▲ | andrei_says_ 3 hours ago | parent | prev [-] | | Maybe it depends on whose office? C-suite management who salivate after reducing software engineer headcount? |
| |
| ▲ | ElProlactin 6 hours ago | parent | prev [-] | | Enshittification comes for us all |
|
|
| ▲ | supriyo-biswas 5 hours ago | parent | prev | next [-] |
| Somehow this narrative has taken hold at multiple levels of management, especially amongst non-technical management: that "typing" was somehow the bottleneck of software engineering. Reality, however, is more complex. The act of "typing" code was mixed in with researching solutions, which means that code often took a different shape or design based on the outcome of that research. This nuance has typically been dismissed as faff, with the outcome that management thinks producing X lines of code can be done "quickly", and people disagreeing with such statements are heretics who should be burned at the stake. This is why, in my personal opinion, AI makes me only about 20% more productive: I often find myself disagreeing with the solution it came up with, and instead of having to steer it to obtain the outcome I want, I just end up rewriting the code myself. On the other hand, for prototypes where I don't care about understanding the code at all, it is a much bigger time saver. But not caring about the code, while perhaps acceptable to management, means not being responsible for the code while still being responsible for the outcomes - the same shit as being given responsibilities without autonomy, which is not something I can agree to. |
| |
| ▲ | jorvi 2 hours ago | parent [-] | | AI is good at the first 80% but terrible at the last 20% of producing good code. And you need to go through that first 80% to really understand what the code is scaffolded to do, which writing it yourself vastly improves. And typing speed has never been the bottleneck for coding. Even worse, a whole generation of devs is being trained not to care about or learn that last 20%, because the AI does it """all""" for them. That last bit is an unknown unknown for the neo-developer, né prompter. |
|
|
| ▲ | hirako2000 5 hours ago | parent | prev | next [-] |
| More and more people believe a software developer's job and value lie in the lines of code produced. Perhaps over half of engineering managers, unconsciously or admittedly, take the number of PRs and code additions as a rough but valid measure of productivity. I recall a role in architecture where a senior director asked me how come a principal engineer hadn't committed any code in 2 weeks, given that we pay principals a fortune. I asked that brilliant mind whether we paid principal engineers to code or to make sure we deliver value. Needless to say, the question went unanswered; the so-called Principal was fired a few months later. The entire company, in fact, was sold for a bargain too, despite having thousands of clients globally. "LLMs can replace engineers" is a phenomenon that converges from two simple facts: we haven't solved the misconception of what engineering roles are, and it's the perfect scapegoat to justify layoffs. Leaders haven't all gone insane; they answer difficult questions with the narrative of least resistance. |
| |
| ▲ | andrei_says_ 3 hours ago | parent [-] | | > Leaders haven't all gone insane, they answer to difficult questions with the narrative of least resistance. Brilliantly said. I’d like to add - a distorted narrative actively, intentionally established and maintained by the entities profiting from the technology. Quite similar to the crypto scam hype cycle. |
|
|
| ▲ | MadxX79 6 hours ago | parent | prev | next [-] |
| Brooks's law anno 2026: "Adding manpower to a late software project makes it later -- unless that manpower is AI, then you're golden!" |
| |
| ▲ | steveklabnik 10 minutes ago | parent | next [-] | | I know you're being sarcastic, but this is what OpenAI has said: https://openai.com/index/harness-engineering/ > This translates to an average throughput of 3.5 PRs per engineer per day, and surprisingly the throughput has increased as the team has grown to now seven engineers. We will see if this continues to scale up! | |
| ▲ | smikhanov 6 hours ago | parent | prev [-] | | That law (formulated in the 70s, I’ll remind the reader) hasn't been true for at least a couple of decades now. | | |
| ▲ | medi8r 6 hours ago | parent [-] | | Why not? What changed? It seems like a human factors thing. New people have to get up to speed. Doers become trainers. | | |
| ▲ | smikhanov 5 hours ago | parent [-] | | Several related reasons working at once. The nature of work changed. The boundary between essential and accidental complexity shifted (and it’s unclear whether this distinction still exists). Niche specializations within the field emerged. The way we structure and decompose projects changed dramatically (agile and stuff). One pathological example: if you’re running a server-based product, quite often what stands between you and a new feature launch is literally a couple of thousand lines of Kubernetes YAML. Would adding someone who’s proficient in Kubernetes slow you down? Of course not. One may say, hey, this is just server-side Kubernetes-based development being insane, and I’ll say, the whole modern business of software development is like this. | | |
| ▲ | medi8r 5 hours ago | parent [-] | | Hmm interesting, thanks! I was ready to argue but now I have to think, which is even better. | | |
| ▲ | smikhanov 5 hours ago | parent [-] | | That’s a lovely comment, thank you. If you’re keen to think about it more, consider the fact that the existing members of a project that’s running late don't have as much of an advantage over new joiners as it’s common to think. Yes, they know how the feature they work on relates to other features, but actually implementing that feature very often mostly involves fighting with technology, wrangling the entire stack into the shape you need. In Brooks’s time the stack was paper-thin, almost nonexistent. In modern times it’s not, and adding someone who knows the technology but doesn’t have the domain knowledge related to your feature still helps you. It doesn’t slow you down. One may argue that I’m again pointing to the difference between essential and accidental complexity, and that my argument is essentially “accidental complexity takes over” - but accidental complexity actually does influence your feature too, by defining what’s possible and what’s not. Some good thoughts (not mine) on the modern boundary between essential and accidental complexity: https://danluu.com/essential-complexity/ | |
| ▲ | dasil003 4 hours ago | parent [-] | | I sort of agree that the surface area and incidental complexity of modern stacks give more room to plug in additional developers than was true in the 70s and 80s. But I strongly disagree that this invalidates Brooks's Law. Certainly there are cases where adding people helps, especially if they are stronger engineers than the ones already there, but I’ve also seen way too many projects devolve into resourcing conversations when the real problem was over-complicated, poorly reasoned requirements and boil-the-ocean solutions promising a perfect end state without a clear plan to get there iteratively. | | |
| ▲ | ldng 3 hours ago | parent [-] | | Plus, the "since there are more resources, let's add features" effect. |
|
|
|
|
|
|
|
|
| ▲ | bee_rider 4 hours ago | parent | prev | next [-] |
| “LoC is a bad metric” has been the catchphrase of engineers for years, because it runs counter to the expectations of management and the general public, right? So it makes sense that LoC is the metric used to advertise to them. |
|
| ▲ | tdeck 5 hours ago | parent | prev | next [-] |
| I asked Grok to rewrite your comment and it did it in 2400 words. I hope you know you'll be obsolete soon. |
|
| ▲ | danjc 2 hours ago | parent | prev | next [-] |
| I've been waiting for someone to say this. An agent will generally produce far more code than technically necessary for the task. It's a kind of over engineering which makes it increasingly harder to wrap your head around the codebase. |
| |
| ▲ | truthbe 44 minutes ago | parent [-] | | Over-engineered implies the codebase was inflated with some kind of rationale by the AI, but there is none. It's just code vomit with duct tape. |
|
|
| ▲ | samiv 3 hours ago | parent | prev | next [-] |
| That's because they're an additive tool. Everything boils down to "adding" more code. But in the long term it's not about how much code you can add but how little you can get away with. And this is an impossible task for the LLMs. How would you train one not to write code? What would the training data look like? Would it be all the lines of code that haven't been written? |
| |
| ▲ | skeledrew 3 hours ago | parent | next [-] | | TDD would help here, particularly if a human writes - or at least thoroughly reviews - the tests. https://martinfowler.com/bliki/TestDrivenDevelopment.html | |
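The TDD flow the comment suggests can be sketched briefly: a human writes the test first, and the implementation (human- or LLM-written) only has to make it pass. The function name `slugify` and its behavior here are hypothetical, chosen purely for illustration.

```python
# TDD sketch: the human-authored test exists before the implementation
# and acts as the contract an LLM's code must satisfy.
import re

# Step 1: the human writes (or at least thoroughly reviews) this test.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("already-a-slug") == "already-a-slug"

# Step 2: the possibly-generated implementation the test constrains.
def slugify(text):
    text = text.lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics
    return text.strip("-")

test_slugify()  # red/green: fails loudly if the implementation drifts
print("all tests pass")
```

The point is not the toy function but the asymmetry: reviewing a short, human-owned test is much cheaper than reviewing every line of generated implementation.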
| ▲ | simgt 2 hours ago | parent | prev | next [-] | | Well they will train on my Claude Code sessions for a start. I spend a lot of time asking it to remove unnecessary code that was produced, I'm not the only one. | |
| ▲ | tartoran 3 hours ago | parent | prev [-] | | That’s not an impossible task with LLMs, you just have to mindfully architect the project with that in mind, hence take it slowly to design a good system, don’t outsource all thinking to LLMs. |
|
|
| ▲ | K0balt 3 hours ago | parent | prev | next [-] |
| It’s definitely an issue when using coding assistants. If you are careful and specific you can keep things reasonable, but even when I am careful and do consolidation / factoring passes, have rigid separation of concerns, etc., I find that the LLM code is bigger than mine, mainly for two reasons: 1) more extensive inline documentation
2) more complete expression of the APIs across concerns, as well as stricter separation. 2.5) often, also a bit of demonstrative structure that could be more concise but exists in a less compact form to demonstrate its purpose and function (a high degree of cleverness avoidance). All in all, if you don’t just let it run amok, you can end up with better code and increased productivity in the same stroke, but I find it comes at about a 15% plumpness penalty, offset by readability and obvious functionality. Oh, forgot to mention, I always make it clean-room most of the code it might want to pull in from libraries, except extremely core standard libraries, or the really heavy stuff like Bluetooth / WiFi protocol stacks etc. I find a lot of library-type code ends up withering away with successive cleanup passes, because it wasn’t really necessary, just cognitively easier for implementing a prototype. With refinement, the functionality ends up burrowing in, often becoming part of the data structure where it really belonged in the first place. |
|
| ▲ | sd9 6 hours ago | parent | prev | next [-] |
| LLMs are incredibly eager to write new code, rather than modifying or integrating with existing systems. I agree that context windows are too small currently for this to seem sustainable. Without reasonable architecture pure vibe coded software feels like it’s going to cap out at a certain size. |
|
| ▲ | KronisLV 5 hours ago | parent | prev | next [-] |
| As lines of code become executable line noise, I swear that we need better approaches to developing software - either enforce better test coverage across the board, develop and use languages where it’s exceedingly hard to end up with improper states, or sandbox the frick out of runtimes and permissions. Just as an example, I should easily be able to give each program an allowlist of network endpoints they’re allowed to use for inbound and outgoing traffic and sandbox them to specific directories and control resource access EASILY. Docker at least gets some of those right, but most desktop OSes feel like the Wild West even when compared to the permissions model of iOS. |
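The per-program endpoint allowlist the comment wishes for can at least be illustrated in userland. This is NOT a real sandbox (actual enforcement needs a kernel-, container-, or OS-policy-level mechanism, e.g. network namespaces or seccomp); the endpoints below are hypothetical, and the sketch only shows the policy shape by wrapping `socket.create_connection`.

```python
# Userland illustration of an outbound-endpoint allowlist.
# Not real security: code in the same process can bypass the wrapper.
import socket

ALLOWED_ENDPOINTS = {          # hypothetical policy for one program
    ("api.example.com", 443),
    ("localhost", 5432),
}

_real_create_connection = socket.create_connection

def guarded_create_connection(address, *args, **kwargs):
    # Refuse any (host, port) pair not explicitly allowed.
    if address not in ALLOWED_ENDPOINTS:
        raise PermissionError(f"outbound connection to {address} denied")
    return _real_create_connection(address, *args, **kwargs)

socket.create_connection = guarded_create_connection

# A disallowed endpoint is refused before any packet is sent:
try:
    socket.create_connection(("evil.example.net", 80))
except PermissionError as e:
    print(e)
```

A real OS-level version of this policy is what the comment is asking for: the same allowlist declared per program, enforced below the program rather than inside it.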
|
| ▲ | CuriouslyC 5 hours ago | parent | prev | next [-] |
| The lines of code thing isn't because we think it's a good metric, but because we have literally no good metric and we're trying to communicate a velocity difference. If you invent a new metric that doesn't have LoC's problems while being as easy to use, you'll be a household name in software engineering in short order. Also, AI is better at reading code than writing it, but the overhead to FIND code is real. |
| |
|
| ▲ | ninkendo 5 hours ago | parent | prev | next [-] |
| Respectfully, it feels like your position requires a brain-dead level of incompetence on the part of LLM users in order for your conclusion to be correct. My personal anecdote: I used an LLM recently to basically vibe code a password manager. Now, I’ve been a software engineer for 20 years. I’m very familiar with the process of code review and how to dive in to someone else’s code and get a feel for what’s happening, and how to spot issues. So when I say the LLM produced thousands of lines of working code in a very short time (probably at least 10 times faster than I would have done it), you could easily point at me and say “ha, look at ninkendo, he thinks more lines of code equals better!” and walk away feeling smug. Like, in your mind perhaps you think the result is an unmaintainable mess, and that the only thing I’m gushing about is the LOC count. But here’s the thing: it actually did a good job. I was personally reviewing the code the whole time. And believe me when I say, the resulting product is actually good. The code is readable and obvious, it put clean separation of responsibilities into different crates (I’m using rust) and it wrote tons of tests, which actually validate behavior. It’s very near the quality level of what I would have been able to do. And I’m not half bad. (I’ve been coding in rust in particular, professionally for about 2 years now, on top of the ~20 years of other professional programming experience before that.) My takeaway is that as a professional engineer, my job is going to be shifting from doing the actual code writing, to managing an LLM as if it’s my pair programming partner and it has the keyboard. I feel sad for the loss of the actual practice of coding, but it’s all over but the mourning at this point. This tech is here to stay. |
| |
| ▲ | FEELmyAGI 3 hours ago | parent | next [-] | | This whole reply, and every other "anecdote" reply, is more worthless than the pixels it's printed on without a link to your "actually did a good job" password manager. (wow, funny how these vibe-code apps are always copies of something there's many open source versions of already) | | |
| ▲ | ninkendo 2 hours ago | parent [-] | | Ugh, you made me spend the 20 minutes it takes to spin up a new github account to share this (my existing one uses my real name and I don't really want to doxx myself that much. Not that it's a huge deal, my real identity and the "ninkendo" handle have been intertwined a lot in the past.) https://github.com/ninkendo84/kenpass I'm not saying it's perfect, there's some things I would've done differently in the code. It's also not even close to done/complete, but it has: - A background agent that keeps the unsealed vault in-memory - A CLI for basic CRUD - Encryption for the on-disk layout that uses reasonably good standards (pbkdf2 with 600,000 iterations, etc) - Sync with any server that supports webdav+etags+mTLS auth (I just take care of this out of band, I had the LLM whip up the nginx config though) - A very basic firefox extension that will fill passwords (I only did 2 or 3 rounds of prompting for that one, I'm going to add more later) Every commit that was vibe-coded contains the prompt I gave to Codex, so you can reproduce the entire development yourself if you want... A few of the prompts were actually constructed by ChatGPT 5.2. (It started out as a conversation with ChatGPT about what the sync protocol would look like for a password manager in a way that is conflict-free, and eventually I just said "ok give me a prompt I can give to codex to get a basic repo going" and then I just kept building from there.) Also full disclosure, it had originally put all the code for each crate in a single lib.rs, so I had it split the crates into more modules for readability, before I published but after I made the initial comment in this thread. I haven't decided if I want to take this all the way to something I actually use full time, yet. I just saw the 1password subscription increase and decided "wait what if I just vibe-coded my own?" 
(I also don't think it's even close to worthy of a "Show HN", because literally anybody could have done this.) | | |
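The key-derivation step described above (PBKDF2 with 600,000 iterations) can be sketched with only the Python standard library. The actual project is in Rust, so this is an illustrative analogue, not its code; the 32-byte key length and SHA-256 choice here are assumptions beyond what the comment states.

```python
# Sketch of PBKDF2 password stretching for a vault key.
# Iteration count matches the comment; other parameters are illustrative.
import hashlib
import os

ITERATIONS = 600_000  # deliberately slow to resist brute force

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    """Stretch the master password into a 32-byte symmetric vault key."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode("utf-8"),
        salt,
        ITERATIONS,
        dklen=32,
    )

salt = os.urandom(16)  # stored alongside the vault; random but not secret
key = derive_vault_key("correct horse battery staple", salt)
print(len(key))  # prints 32
```

The derived key would then be fed to an authenticated cipher for the on-disk vault; only the salt and ciphertext are persisted, never the key.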
| ▲ | FEELmyAGI 28 minutes ago | parent [-] | | Thank you for the time commitment based on an internet forum comment. I greatly appreciate the succinct human-written README. Did you investigate prior art before setting out on this endeavor?
https://www.google.com/search?q=site%3Agithub.com+password+m... I ask because engineers need to be clever and wise. Clever means being capable of turning an idea into code, either by writing it or, recently, by having the vocabulary and eloquence to prompt an LLM. Wisdom means knowing when and where to apply cleverness, and where not to - like being able to recognize existing sub-components. | | |
| ▲ | ninkendo 15 minutes ago | parent [-] | | > Did you investigate prior art before setting out on this endeavor Lol no, I had no idea there were any other password managers! Thanks for the google search link! I didn't know search engines existed either! > Wisdom means knowing when and where to apply cleverness, and where not to. like being able to recognize existing sub-components. It says literally in the README that part of this is an exercise in seeing what an LLM can do. I am in no way suggesting anyone use this (because there's a bazillion other password managers already), nor would I even have made this public if you hadn't baited me into doing it. The fact that there's a literal sea of password managers out there is why I'm curious enough to think "maybe one that I get to design myself, written to exactly my tastes and my tastes alone, could be feasible", and that's what this exercise is about. It literally took me less time to vibe-code what I have right now than to pore through the sea of options that already exist and decide which one I should try. And having it be mine at the end means I can implement my pet features the way I want, without having to worry one bit about fighting with upstream maintainers. It's also just fun. I thoroughly enjoy the process of thinking about the design and iterating on it. |
|
|
| |
| ▲ | bee_rider 4 hours ago | parent | prev [-] | | If you measure the productivity of the system that is “you, using an LLM” in terms of the rate at which you can get actually-reviewed code completed (which, based on your comment, seems to be what you were doing), that seems like a totally reasonable way of doing things. But in that case the bottleneck is probably you reviewing code, right? Which, I bet, is faster than writing code. But you probably won’t get the truly absurd superhuman speed-ups. What would you say is your multiplier, in terms of thoroughly reviewing code vs writing it from scratch? | | |
| ▲ | ninkendo an hour ago | parent [-] | | Yeah, I guess that's kinda my point. LLM detractors on HN seem to straw-man what they think the average LLM user is doing. I'm an experienced programmer who is using an LLM as a speed boost, and the result of that is that it produces thousands of lines of code in a short time. The impressive thing isn't merely that it produces thousands of lines of code, it's that I've reviewed the code, it's pretty good, it works, and I'm getting use out of the resulting project. > What would you say is your multiplier, in terms of throughly reviewing code vs writing it from scratch? I'd say about 10x. More than that (and closer to 100x) if I'm only giving the code a cursory glance (sometimes I just look at the git diff, it looks pretty damned reasonable to me, and I commit it without diving that deep into the review. But I sometimes do something similar when reviewing coworkers' code!) |
|
|
|
| ▲ | inciampati 5 hours ago | parent | prev | next [-] |
| Lines of code are nothing. It's verification that creates value. |
|
| ▲ | wredcoll 5 hours ago | parent | prev | next [-] |
| Really it just continues to demonstrate that "code quality" is not and was not a requirement. Even with supposedly expert human hand written software powering our products for the last decades, they frequently crash, have outages, and show all sorts of smaller bugs. There are literally too many examples to count of video games being released with nigh-unplayable amounts of bugs and still selling millions and producing sequels. Windows 95 and friends were famously buggy and crash prone yet produced one of the most valuable companies in the world. |
|
| ▲ | theptip 3 hours ago | parent | prev | next [-] |
| Yeah, I would view this as a “levels of maturity” thing. It’s not completely misguided to judge a junior dev on whether they shipped 0 LoC or 1 kLoC, assuming you have some quality counter-metric like “the app works”. For staff engineers it’s obviously complete nonsense; many don’t code and just ship architecture docs. Or you can ship a net-negative refactor. Etc. So this should tell you that LLMs are still in “savant junior dev” territory. That said, being given permission to ship more lines of code under existing enterprise quality bars _is_ a meaningful signal. |
|
| ▲ | spacecadet 6 hours ago | parent | prev [-] |
| I mean, many of us have... I operate in a net-negative mindset: my PRs had better remove more than they add. I also use AI this way, periodically achieving a net-negative refactor. |