The highest quality codebase (gricha.dev)
309 points by Gricha 3 days ago | 234 comments
xnorswap 4 hours ago | parent | next [-]

Claude is really good at specific analysis, but really terrible at open-ended problems.

"Hey claude, I get this error message: <X>", and it'll often find the root cause quicker than I could.

"Hey claude, anything I could do to improve Y?", and it'll struggle beyond the basics that a linter might suggest.

It enthusiastically suggested a library for <work domain> and was all "Recommended" about it, but when I pointed out that the library had been considered and rejected because <issue>, it understood and wrote up why that library suffered from that issue and why it was therefore unsuitable.

There's a significant blind spot in current LLMs related to blue-sky thinking and creative problem solving. They can do structured problems very well, and they can transform unstructured data very well, but they can't deal with unstructured problems very well.

That may well change, so I don't want to embed that thought too deeply into my own priors, because the LLM space seems to evolve rapidly. I wouldn't want to find myself blind to the progress because I'd written it off for a class of problems.

But right now, the best way to help an LLM is to have a deep understanding of the problem domain yourself, and just leverage it to do the grunt-work that you'd find boring.

pdntspa 4 hours ago | parent | next [-]

That's why you treat it like a junior dev. You do the fun stuff of supervising the product, overseeing design and implementation, breaking up the work, and reviewing the outputs. It does the boring stuff of actually writing the code.

I am phenomenally productive this way, I am happier at my job, and its quality of work is extremely high as long as I occasionally have it stop and self-review its progress against the style principles articulated in its AGENTS.md file (as it tends to forget a lot of rules, like DRY).

n4r9 4 hours ago | parent | next [-]

I think we have different opinions on what's fun and what's boring!

Nemi 3 hours ago | parent | next [-]

You've really hit the crux of the problem and why so many people have differing opinions about AI coding. I also find coding more fun with AI. The reason is that my main goal is to solve a problem, or someone else's problem, in a way that is satisfying. I don't much care about the code itself anymore. I care about the thing that it does when it's done.

Having said that, I used to be deep into coding, and back then I am quite sure I would have hated AI coding for me. I think for me it comes down to this: when I was learning about coding and stretching my personal knowledge in the area, the coding part was the fun part because I was learning. Now that I am past that part I really just want to solve problems, and coding is the means to that end. AI is now freeing because where I would have been reluctant to start a project, I am more likely to give it a go.

I think it is similar to when I used to play games a lot. When I would play a game where you would discover new items regularly, I would go at it hard and heavy up until the point where I determined there were either no new items to be found or it was just "more of the same". When I got to that point it was like a switch would flip and I would lose interest in the game almost immediately.

breuleux 41 minutes ago | parent | next [-]

I think it ultimately comes down to whether you care more about the what, or more about the how. A lot of coders love the craft: making code that is elegant, terse, extensible, maintainable, efficient and/or provably correct, and so on. These are the kind of people who write programming languages, database engines, web frameworks, operating systems, or small but nifty utilities. They don't want to simply solve a problem, they want to solve a problem in the "best" possible way (sometimes at the expense of the problem itself).

It's typically been productive to care about the how, because it leads to better maintainability and a better ability to adapt or pivot to new problems. I suppose that's getting less true by the minute, though.

agumonkey an hour ago | parent | prev | next [-]

it's true that 'code' doesn't mean much, but the ability to manage different layers and states to produce logic modules was the challenge

getting things solved entirely feels very very numbing to me

even when gemini or chatgpt solves it well, and even beyond what i'd imagine.. i feel a sense of loss

pdntspa 3 hours ago | parent | prev [-]

You are hitting the nail on the head. We are not being hired to write code. We are being hired to solve problems. Code is simply the medium.

wahnfrieden 2 hours ago | parent | next [-]

I believe wage work is a significant factor in all this.

Most are not paid for results, they're paid for time at desk and regular responsibilities such as making commits, delivering status updates, code reviews, etc. - the daily activities of work are monitored more closely than the output. Most ESOPs grant so little equity that working harder could never observably drive an increase in its value. Getting a project done faster just means another project to begin sooner.

Naturally workers will begin to prefer the motions of the work they find satisfying more than the result it has for the business's bottom line, from which they're alienated.

agumonkey an hour ago | parent | prev [-]

but do you solve the problem if you just slap a prompt and iterate while the LLM gathers diffs?

embedding-shape 3 hours ago | parent | prev | next [-]

Some people are into designing software, others like to put the design into implementation, others like cleaning up implementations, and yet others like making functional software faster.

There is enough work for all of us to be handsomely paid while having fun doing it :) Just find what you like, and work with others who like other stuff, and you'll get through even the worst of problems.

For me the fun comes not from the action of typing stuff with my sausage fingers and seeing characters end up on the screen, but from basically everything before and after that. So if I can make "translate what's in my head into source on disk something can run" faster, that's a win in my book, but not if the quality degrades too much, so I want tight control over it while still not having to use my fingers to actually type.

mkehrt 2 hours ago | parent [-]

I've found that good AI-based tab completion is the sweet spot for me. I am still writing code, but I don't have to type all of it if it's obvious.

AStrangeMorrow 3 hours ago | parent | prev | next [-]

I really enjoy writing some of the code. But some is a pain. Never have fun when the HQ team asks for API changes for the 5th time this month. Or for that matter writing the 2000 lines of input and output data validation in the first place. Or refactoring that ugly dictionary passed all over the place to be a proper class/dataclass. Handling config changes. Lots of that piping job.

Some tasks I do enjoy coding. Once in the flow it can be quite relaxing.

But mostly I enjoy the problem solving part: coming up with the right algorithm, a nice architecture, the proper set of metrics to analyze, etc.

moffkalast 3 hours ago | parent | prev [-]

He's a real straight shooter with upper management written all over him.

wpasc 3 hours ago | parent | next [-]

but what would you say... you do here?

SoftTalker 2 hours ago | parent | prev [-]

Ummm, yeah... I’m gonna have to go ahead and sort of disagree with you there.

FeteCommuniste 4 hours ago | parent | prev | next [-]

Maybe I'm weird but I enjoy "actually writing the code."

vitro 3 hours ago | parent | next [-]

I sometimes think of it as a sculptor analogy.

Some famous sculptors had an atelier full of students that helped them with mundane tasks, like carving out a basic shape from a block of stone.

When the basic shape was done, the master came and did the rest. You may want to have the physical exercise of doing the work yourself, but maybe someone sometimes likes to do the fine work and leave the crude one to the AI.

breuleux 3 hours ago | parent | prev | next [-]

In my case, it really depends what. I enjoy designing systems and domain-specific languages or writing libraries that work the way I think they should work.

On the other hand, if e.g. I need a web interface to do something, the only way I can enjoy myself is by designing my own web framework, which is pretty time-consuming, and then I still need to figure out how to make collapsible sections in CSS and blerghhh. Claude can do that in a few seconds. It's a delightful moment of "oh, thank god, I don't have to do this crap anymore."

There are many coding tasks that are just tedium, including 99% of frontend development and over half of backend development. I think it's fine to throw that stuff to AI. It still leaves a lot of fun on the table.

nyadesu 4 hours ago | parent | prev | next [-]

In my case, I enjoy writing code too, but it's helpful to have an assistant I can ask to handle small tasks so I can focus on a specific part that requires attention to detail

FeteCommuniste 4 hours ago | parent [-]

Yeah, I sometimes use AI for questions like "is it possible to do [x] using library [y] and if so, how?" and have received mostly solid answers.

stouset 3 hours ago | parent | next [-]

Or “can you prototype doing A via approaches X, Y, and Z, and show me what each looks like?”

I love to prototype various approaches. Sometimes I just want to see which one feels like the most natural fit. The LLM can do this in a tenth of the time I can, and I just need to get a general idea of how each approach would feel in practice.

skydhash 3 hours ago | parent [-]

> Sometimes I just want to see which one feels like the most natural fit.

This sentence alone is a huge red flag in my books. Either you know the problem domain and can argue about which solution is better and why, or you don't, and what you're doing is experimenting to learn the domain.

There's a reason the field is called Software Engineering and not Software Art. Words like "feels" do not belong. It would be like saying which bridge design feels like the most natural fit for the load. Or which material feels like the most natural fit for a brake system.

mjr00 3 hours ago | parent | next [-]

> There's a reason the field is called Software Engineering and not Software Art. Words like "feels" do not belong.

Software development is nowhere near advanced enough for this to be true. Even basic questions like "should this project be built in Go, Python, or Rust?" or "should this project be modeled using OOP and domain-driven design, event-sourcing, or purely functional programming?" are decided largely by the personal preferences of whoever the first developer is.

skydhash an hour ago | parent [-]

Such questions may be decided by personal preferences, but their impact can easily be demonstrated. Such impacts are what F. Brooks calls accidental complexity and what we generally call technical debt. It's just that, unlike other engineering fields, there are not a lot of physical constraints and the decision space has many more dimensions.

mjr00 an hour ago | parent [-]

> Such questions may be decided by personal preferences, but their impact can easily be demonstrated.

I really don't think this is true. What was the demonstrated impact of writing Terraform in Go rather than Rust? Would writing Terraform in Rust have resulted in a better product? Would rewriting it now result in a better product? Even among engineers with 15 years experience you're going to get differing answers on this.

skydhash 30 minutes ago | parent [-]

The impact is that now, if you want to modify the project in some way, you will need to learn Go. It's like all the codebases in COBOL. Maybe COBOL at that time was the best language for the product, but now, it's not that easy to find someone with the knowledge to maintain the system. As soon as you make a choice, you accept that further down the line, there will be some cost X to keep going in that direction and some cost Y to revert. As a technical lead, you more often need to ensure that X and/or Y don't grow to be enormous.

mjr00 10 minutes ago | parent [-]

> The impact is that now, if you want to modify the project in some way, you will need to learn Go.

That's tautologically true, yes, but your claim was

> Either you know the problem domain and can argue about which solution is better and why. Or you don't and what you're doing are experiment to learn the domain.

So, assuming the domain of infrastructure-as-code is mostly known now -- which is a fair statement -- which is a better choice, Go or Rust, and why? Remember, this is objective fact, not art, so no personal preferences are allowed.

fluidcruft 3 hours ago | parent | prev [-]

For example sometimes you're faced with choosing between high-quality libraries to adopt and it's not particularly clear whether you picked the wrong one until after you've tried integrating them. I've found it can be pretty helpful to let the LLM try them all and see where the issues ultimately are.

skydhash an hour ago | parent [-]

> sometimes you're faced with choosing between high-quality libraries to adopt and it's not particularly clear whether you picked the wrong one until after you've tried integrating them.

Maybe I'm lucky, but I've never encountered this situation. It has been mostly about what tradeoffs I'm willing to make. Libraries are more lines of code added to the project, thus they are liabilities. Including one is always a bad decision, so I only do so because the alternative is worse. Having to choose between two is more like choosing between Scylla and Charybdis (known tradeoffs) than deciding to go left or right in a maze (mystery outcome).

fluidcruft 28 minutes ago | parent [-]

It probably depends on what you're working on. For the most part relying on a high-quality library/module that already implements a solution is less code to maintain. Any problems with the shared code can be fixed upstream with more eyeballs and more coverage than anything I build locally. I prefer to keep my eyeballs on things most related to my domain and not maintain stuff that's both ultimately not terribly important and replaceable (if push comes to shove).

Generally, you are correct that having multiple libraries to choose among is concerning, but it really depends. Mostly it's stylistic choices and it can be hard to tell how it integrates before trying.

nottorp 3 hours ago | parent | prev | next [-]

Just be careful if functionality varies between library y version 2 and library y version 3, or if there is a similarly named library y2 that isn't the same.

You may get possibilities, but not for what you asked for.

pdntspa 3 hours ago | parent [-]

If you run it to the point where you can execute each idea and examine its outputs, problems like that surface pretty quickly.

nottorp 3 hours ago | parent [-]

Of course, by that time i could have read the docs for library y the version I'm using...

pdntspa 3 hours ago | parent [-]

There are many roads to Rome...

georgemcbay 3 hours ago | parent | prev [-]

> Yeah, I sometimes use AI for questions like "is it possible to do [x] using library [y] and if so, how?" and have received mostly solid answers.

In my experience most LLMs are going to answer this with some form of "Absolutely!" and then propose a square-peg-into-a-round-hole way to do it that is likely suboptimal vs using a different library that is far more suited to your problem if you didn't guess the right fit library to begin with.

The sycophancy problem is still very real even when the topic is entirely technical.

Gemini is (in my experience) the least likely to lead you astray in these situations, but it's still a significant problem even there.

jessoteric 6 minutes ago | parent [-]

IME this has been significantly reduced in newer models like 4.5 Opus and to a lesser extent Sonnet, but I agree it's still sort of bad - mainly because the question you're posing is bad.

if you ask a human this the answer can also often be "yes [if we torture the library]", because software development is magic and magic is the realm of imagination.

much better prompt: "is this library designed to solve this problem" or "how can we solve this problem? i am considering using this library to do so, is that realistic?"

pdntspa 4 hours ago | parent | prev | next [-]

Me writing code is me spending 3/4 of my time wading through documentation and google searches. It's absolutely hell on my ADD. My ability to memorize is absolutely garbage. Throughout my career I've worked in like 10 different languages, and in any given project I'm usually working in at least 3 or 4. There's a lot of "now what is a map operation in this stupid fucking language called again?!"

Claude writing code gets the same output if not better in about 1/10 of the time.

That's where you realize that the writing code bits are just one small part of the overall picture. One that I realize I could do without.

n4r9 4 hours ago | parent | next [-]

May be a domain issue? If you're largely coding within a JS framework (which most software devs are tbf) then that makes total sense. If you're working in something like fintech or games, perhaps less so.

pdntspa 4 hours ago | parent [-]

My last job was a mix of Ruby, Python, Bash, SQL, and Javascript (and CSS and HTML). One or two jobs before that it was all those plus a smattering of C. A few jobs before that it was C# and Perl.

skydhash 3 hours ago | parent | prev | next [-]

I would say notetaking would be a much bigger help than Claude at this point. There are a lot of methods to organize information that I believe would help you better than a hallucination machine.

neoromantique 3 hours ago | parent [-]

Notetaking with ADHD is another sort of hell to be honest.

I absolutely can attest to what parent is saying, I have been developing software in Python for nearly a decade now and I still routinely look up the /basics/.

LLMs have been a complete gamechanger for me, reducing the friction from "ok, let me google what I need in a very roundabout way because my memory won't spit it out" to a fast and often inline LLM lookup.

skydhash an hour ago | parent [-]

Looking up documentation is normal. If not, we wouldn't have the manual pages in Unix and such an emphasis on documentation in ecosystems like Lisp, Go, Python, Perl,... We even have cheatsheets and syntax references books because it's just so easy to forget the /basics/.

I said notetaking, but it's more about building your own index. In $WORK projects, I mostly use the browser bookmarks, the ticket system, the PR description and commits to contextually note things. In personal projects, I have an org-mode file (or a basic text file) and a lot of TODO comments.

tayo42 4 hours ago | parent | prev [-]

How do you end up with 3 to 4 languages in one project?

jessoteric 4 minutes ago | parent | next [-]

i find it's pretty rare to have a project that only consists of one or two languages, over a certain complexity/feature threshold

saulpw 4 hours ago | parent | prev | next [-]

Typescript on the frontend, Python on the backend, SQL for the database, bash for CI. This isn't even counting HTML/CSS or the YAML config.

tayo42 3 hours ago | parent [-]

I wouldn't call html, yaml or css languages.

Same for sql, do you really context switch between sql and other code that frequently?

Everyone should stop using bash, especially if you have a scripting language you can use already.

wosat 39 minutes ago | parent | next [-]

Sorry for being pedantic, but what does the "L" stand for in HTML, YAML, SQL? They may not be "programming languages" or, in the case of SQL, a "general purpose programming language", but they are indeed languages.

pdntspa 3 hours ago | parent | prev [-]

Dude have you even written any hardcore SQL? PL/pgSQL is very much a Turing-complete language

merely-unlikely 3 hours ago | parent | prev | next [-]

Recently I've been experimenting with using multiple languages in some projects where certain components have a far better ecosystem in one language but the majority of the project is easier to write in a different one.

For example, I often find Python has very mature and comprehensive packages for a specific need I have, but it is a poor language for the larger project (I also just hate writing Python). So I'll often put the component behind a http server and communicate that way. Or in other cases I've used Rust for working with WASAPI and win32 which has some good crates for it, but the ecosystem is a lot less mature elsewhere.

I used to prefer reinventing the wheel in the primary project language, but I wasted so much time doing that. The tradeoff is the project structure gets a lot more complicated, but it's also a lot faster to iterate.

Plus your usual html/css/js on the frontend and something else on the backend, plus SQL.

zelphirkalt an hour ago | parent | prev | next [-]

3 or 4 can very easily accumulate. For example: HTML and CSS as must-knows, plus some JS/TS (actually that's 2 langs!) for sprinkles of interactivity, backend in any proper backend language. Oh wait, there is a fifth language, SQL, because we need to access the database. Ah, and those few shell scripts we need? Someone's gotta write those too. They may not always be full programming languages, but languages they are, and one needs to know them.

tomgp 3 hours ago | parent | prev | next [-]

HTML, CSS, Javascript?

pdntspa 3 hours ago | parent | prev [-]

Oh my sweet summer child...

loloquwowndueo 4 hours ago | parent | prev [-]

“I want my AI to do laundry and dishes so I can code, not for my AI to code so I can do laundry and dishes”

thewebguyd 3 hours ago | parent | next [-]

This sums up my feelings almost exactly.

I don't want LLMs, AI, and eventually Robots to take over the fun stuff. I want them to do the mundane, physical tasks like laundry and dishes, and leave the fun creative stuff to me.

But as we progress right now, the hype machine is pushing AI to take over art, photography, video, coding, etc. All the stuff I would rather be doing. Where's my house cleaning robot?

zelphirkalt an hour ago | parent [-]

I would like to go even further and say: Those things, art, photography, video, coding ... They are forms of craft, human expression, creativity. They are part of what makes life interesting. So we are in the process of eliminating the interesting and creative parts, in the name of profit and productivity maxing (if any!). Maybe we can create the 100th online platform for the same thing soon 10x faster! Wow!

Of course this is a bit too black and white. There can still be a creative human being introducing nuance and differences, trying to get the automated tools to do things differently in the details or some aspects. The question is, losing all those creative jobs (in absolute numbers of people doing them), what will we as a society, or we as humanity, become? What's the ETA on UBI, so that we can reap the benefits of what we automated away, instead of filling the pockets of a few?

minimaxir 4 hours ago | parent | prev | next [-]

Claude is very good at unfun-but-necessary coding tasks such as writing docstrings and type hints, which is a prominent instance of "laundry and dishes" for a dev.

loloquwowndueo 3 hours ago | parent | next [-]

“Sorry, the autogenerated api documentation was wrong because the ai hallucinated the docstring”

mrguyorama 3 hours ago | parent | prev [-]

>writing docstrings and type hints

Disagree. Claude makes the same garbage worthless comments as a Freshman CS student. Things like:

    // Frobbing the bazz
    res = util.frob(bazz);

Or

    // If bif is True here then blorg
    if (bif) { blorg; }

Like wow, so insightful

And it will ceaselessly try to auto-complete your comments with utter nonsense that is mostly grammatically correct.

The most success I have had is using claude to help with Spring Boot annotations and config processing (Because documentation is just not direct enough IMO) and to rubber duck debug with, where claude just barely edges out the rubber duck.

minimaxir 3 hours ago | parent [-]

I intentionally said docstrings instead of comments. Agents' comments can be verbose by default, but a line in the AGENTS.md does indeed wrangle modern agents into only commenting on high-signal code blocks that are not tautological.

moffkalast 2 hours ago | parent | prev | next [-]

Well it would be funnier if dishwashers, washing machines and dryers hadn't automated that ages ago. It's literally one of the first things robots started doing for us.

re-thc 4 hours ago | parent | prev [-]

Soon you'll realize you're the "AI". We've lost control.

AStrangeMorrow 3 hours ago | parent | prev | next [-]

Yeah at this point I basically have to dictate all implementation details: do this, but do it this specific way, handle xyz edge cases by doing that, plug the thing in here using that API. Basically that expands 10 lines into 100-200 lines of code.

However if I just say “I have this goal, implement a solution”, chances are that unless it is a very common task, it will come up with a subpar/incomplete implementation.

What’s funny to me is that complexity has inverted for some tasks: it can ace a 1,000-line ML model for a general task I give it, yet will completely fail to come up with a proper solution for a 2D geometric problem that mostly involves high-school-level maths and can be solved in 100 lines.

rootnod3 4 hours ago | parent | prev | next [-]

Cool cool cool. So if you use LLMs as junior devs, let me ask you where future awesome senior devs like you will come from. From WHAT job experience? From what coding struggle?

eightysixfour 3 hours ago | parent | next [-]

What would you like individual contributors to do about it, exactly? Refuse to use it, even though this person said they're happier and more fulfilled at work?

I'm asking because I legitimately have not figured out an answer to this problem.

fluidcruft 3 hours ago | parent | prev | next [-]

How do you get junior devs if your concept of the LLM is that it's "a principal engineer" that "do[es] not ask [you] any questions"?

Also, I'm pretty sure junior devs can use directing a LLM to learn from mistakes faster. Let them play. Soon enough they're going to be better than all of us anyway. The same way widespread access to strong chess computers raised the bar at chess clubs.

rootnod3 3 hours ago | parent [-]

I don't think the chess analogy holds here. In chess, you play _against_ the chess computer. Take the same approach and let the chess computer play FOR the player and see how far he gets.

fluidcruft 2 hours ago | parent [-]

Maybe. I don't think adversarial vs not is as important as gaining experience. Ultimately both are problem solving tasks and learning instincts about which approaches work best in certain situations.

I'm probably a pretty shitty developer by HN standards, but I generally have to build a prototype to fully understand and explore a problem and iterate on designs, and LLMs have been pretty good for me as trainers for learning things I'm not familiar with. I do have a certain skill set, but the non-domain stuff can be really slow and tedious work. I can recognize "good enough" and "clean", and I think the next generation can use that model very well to become native with how to succeed with these tools.

Let me put it this way: people don't have to be hired by the best companies to gain experience using best practices anymore.

platevoltage an hour ago | parent | prev | next [-]

There's that long-term thinking that the tech industry, and really every other publicly traded company, is known for.

pdntspa 3 hours ago | parent | prev | next [-]

At my last job there was effectively a gun held to the back of my head, ordering me to use this stuff. And this started about a year ago, when the tooling for agentic dev was absolutely atrocious, because we had a CTO who had the biggest most raging boner for anything that offered even a whiff of "AI".

Unfortunately the bar is being raised on us. If you can't hang with the new order you are out of a job. I promise I was one of the holdouts who resisted this the most. It's probably why I got laid off last spring.

Thankfully, as of this last summer, agentic dev started to really get good, and my opinion made a complete 180. I used the off time to knock out a personal project in a month or two's worth of time, that would have taken me a year+ the old way. I leveraged that experience to get me where I am now.

rootnod3 3 hours ago | parent [-]

Ok, now assume you start relying on it, and let's assume Cloudflare has another outage. You just go and clock out for the day saying "can't work, agent is down"?

I don't think we'll be out of jobs. Maybe temporarily. But those jobs come back. The energy and money drain that LLMs are is just not sustainable.

I mean, it's cool that you got the project knocked out in a month or two, but if you sat down now without an LLM and tried to measure the quality of that codebase, would you be 100% content? Speed is not always a good metric. Sure, 1-2 months for a project is nice, but isn't a personal project especially more about the fun of doing the project, learning something from it, and sharpening your skills?

pdntspa 2 hours ago | parent [-]

When the POS system goes down at a restaurant they'll revert to pen and paper. Can't imagine it's much different in that case.

bpt3 3 hours ago | parent | prev [-]

Why is that a developer's problem? If anything, they are incentivized to avoid creating future competition in the job market.

rootnod3 3 hours ago | parent [-]

It's not a problem for the senior dev directly, but maybe down the road. And it definitely is a problem for the company once said senior dev leaves or retires.

Seriously, long-term thinking went out the window a long time ago, didn't it?

order-matters an hour ago | parent | prev | next [-]

I wonder if DRY is still a principle worth holding onto in the AI coding era. I mean it probably is, but this feels like enough of a shift in coding design that re-evaluating principles designed for human-only coding might be worth the effort

tiku 2 hours ago | parent | prev | next [-]

I enjoy finding the problem and then telling Claude to fix it. Specifying the function and the problem. Then going to get a coffee from the breakroom to see it finished when I return. A junior dev would have questions when I did that. Claude just fixes it.

mjr00 4 hours ago | parent | prev | next [-]

> That's why you treat it like a junior dev. You do the fun stuff of supervising the product, overseeing design and implementation, breaking up the work, and reviewing the outputs. It does the boring stuff of actually writing the code.

I am so tired of this analogy. Have the people who say this never worked with a junior dev before? If you treat your junior devs as brainless code monkeys who only exist to type out your brilliant senior developer designs and architectures instead of, you know, human beings capable of solving problems, 1) you're wasting your time, because a less experienced dev is still capable of solving problems independently, 2) the juniors working under you will hate it because they get no autonomy, and 3) the juniors working under you will stay junior because they have no opportunity to learn--which means you've failed at one of your most important tasks as a senior developer, which is mentorship.

pdntspa 3 hours ago | parent [-]

I have mentored and worked with a junior dev. And the only way to get her to do anything useful and productive was to spell things out. Otherwise she got wrapped around the axle trying to figure out the complex things and was constantly asking for my help with basic design-level tasks. Doing the grunt work is how you learn the higher-level stuff.

When I was a junior, that's how it was for me. The senior gave me something that was structured and architected and asked me to handle smaller tasks that were beneath them.

Giving juniors full autonomy is a great way to end up with an unmaintainable mess that is a nightmare to work with without substantial refactoring. I know this because I have made a career out of fixing exactly this mistake.

mjr00 3 hours ago | parent [-]

I have never worked with junior devs as incompetent as you describe, having worked at AWS, Splunk/Cisco, among others. At AWS even interns essentially got assigned a full project for their term and were just told to go build it. Does your company just have an absurdly low hiring bar for juniors?

> Giving juniors full autonomy is a great way to end up with an unmaintainable mess that is a nightmare to work with without substancial refactoring.

Nobody is suggesting they get full autonomy to cowboy code and push unreviewed changes to prod. Everything they build should be getting reviewed by their peers and seniors. But they need opportunities to explore and make mistakes and get feedback.

pdntspa 3 hours ago | parent [-]

> AWS, Splunk/Cisco

It's an entirely different world in small businesses that aren't primarily tech.

alfalfasprout 4 hours ago | parent | prev [-]

I really hope you don't actually treat junior devs this way...

order-matters 2 hours ago | parent | prev | next [-]

TBH I think its ability to structure unstructured data is what makes it a powerhouse tool, and there is so much juice to squeeze there that we can make process improvements for years even if it doesn't get any better at general intelligence.

If I had a PDF printout of a table, the workflow I used to have to use to get that back into a table data structure for automation was hard (annoying): dedicated OCR tools with limitations on inputs, multiple models in that tool for the different ways the paper the table was on might be formatted. It took hours for a new input format.

Now I can take a photo of something with my phone and get a data table in like 30 seconds.
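
As a rough sketch of what that workflow looks like now (the model name, prompt, file name, and use of the OpenAI Python client are assumptions; any vision-capable model and client works similarly):

    # Sketch: photo of a printed table -> CSV via a vision-capable LLM.
    # Model name, prompt, and file name are illustrative placeholders.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def photo_to_csv(image_path: str) -> str:
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode()
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder for any vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Extract the table in this photo as CSV. "
                             "Output only the CSV, no commentary."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    print(photo_to_csv("table_photo.jpg"))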

People seem so desperate to outsource their thinking to these models and operate at the limits of their capability, but I have been having a blast using them to cut through so much tedium -- things that weren't unsolved problems, but required enough specialized tooling and custom config that they were left alone unless you really had to.

This fits into what you're saying about using it to do the grunt work I find boring, I suppose, but it feels like a bit more than that - like it has opened a lot of doors to spaces that had grunt work that wasn't worth doing for the end result previously, but now it is.

ericmcer 33 minutes ago | parent | prev | next [-]

Exactly. If you visualize software as a bunch of separate "states" (UI state, app state, DB state) then our job is to mutate states and synchronize those mutations across the system. LLMs are good at mutating a specific state in a specific way. They are trash at designing what data shape a state should be, and they are bad at figuring out how/why to propagate mutations across a system.

dolftax 27 minutes ago | parent | prev | next [-]

The structured vs open-ended distinction here applies to code review too. When you ask an LLM to "find issues in this code", it'll happily find something to say, even if the code is fine. And when there are actual security vulnerabilities, it often gets distracted by style nitpicks and misses the real issues.

Static analysis has the opposite problem - very structured, deterministic, but limited to predefined patterns, and it overwhelms you with false positives.

The sweet spot seems to be to give structure to what the LLM should look for, rather than letting it roam free on an open-ended "review this" prompt.

We built Autofix Bot[1] around this idea.

[1] https://autofix.bot (disclosure: founder)

mbesto 4 hours ago | parent | prev | next [-]

> There's a significant blind-spot in current LLMs related to blue-sky thinking and creative problem solving. It can do structured problems very well, and it can transform unstructured data very well, but it can't deal with unstructured problems very well.

While this is true in my experience, the opposite is not true. LLMs are very good at helping me go through a structured process of thinking about architectural and structural design and then helping me build a corresponding specification.

More specifically the "idea honing" part of this proposed process works REALLY well: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/

This: Each question should build on my previous answers, and our end goal is to have a detailed specification I can hand off to a developer. Let’s do this iteratively and dig into every relevant detail. Remember, only one question at a time.

skydhash 3 hours ago | parent [-]

I've checked the linked page and there's nothing about even learning the domain or learning the tech platform you're going to use. It's all blind faith, just a small step above copying stuff from GitHub or StackOverflow and pushing it to prod.

asmor 4 hours ago | parent | prev | next [-]

This is it. It doesn't replace the higher level knowledge part very well.

I asked Claude to fix a pet peeve of mine, spawning a second process inside an existing Wine session (pretty hard if you use umu, since it runs in a user namespace). I asked Claude to write me a Python server to spawn another process to pass through a file handler "in Proton", and it proceeded with a long loop of trying to find a way to launch into an existing Wine session from Linux with tons of environment variables that didn't exist.

Then I specified "server to run in Wine using Windows Python" and it got more things right. Except it tried to use named pipes for IPC. Which, surprise surprise, doesn't work to talk to the Linux piece. Only after I specified "local TCP socket" did it start to go right. Had I written all those technical constraints and made the design decisions in the first message, it'd have been a one-hit success.

james_marks 4 hours ago | parent | prev | next [-]

This is a key part of the AI love/hate flame war.

Very easy to write it off when it spins out on the open-ended problems, without seeing just how effective it can be once you zoom in.

Of course, zooming in that far gives back some of the promised gains.

Edit: typo

thewebguyd 4 hours ago | parent | next [-]

> without seeing just how effective it can be once you zoom in.

The love/hate flame war continues because the LLM companies aren't selling you on this. The hype is all about "this tech will enable non-experts to do things they couldn't do before" not "this tech will help already existing experts with their specific niche," hence the disconnect between the sales hype and reality.

If OpenAI, Anthropic, Google, etc. were all honest and tempered their own hype and misleading marketing, I doubt there would even be a flame war. The marketing hype is "this will replace employees" without the required fine print of "this tool still needs to be operated by an expert in the field and not your average non technical manager."

hombre_fatal 4 hours ago | parent [-]

The number of GUIs I've vibe-coded works against your claim.

As we speak, my macOS menubar has an iStat Menus replacement, a Wispr Flow replacement (global hotkey for speech-to-text), and a logs visualizer for the `blocky` dns filtering program -- all of which I built without reading code aside from where I was curious.

It was so vibe-coded that there was no reason to use SwiftUI nor set them up in Xcode -- just AppKit Swift files compiled into macOS apps when I nix rebuild.

The only effort it required was the energy to QA the LLM's progress and tell it where to improve, maybe click and drag a screenshot into claude code chat if I'm feeling excessive.

Where do my 20 years of software dev experience fit into this, beyond imparting my aesthetic preferences?

In fact, insisting that you write code yourself is becoming a liability in an interesting way: you're going to make trade-offs for DX that the LLM doesn't have to make, like when you use Python or Electron when the LLM can bypass those abstractions that only exist for human brains.

bopbopbop7 4 hours ago | parent | next [-]

You making a couple of small GUIs that could have been made with a drag-and-drop editor 10 years ago doesn't work against his claim as much as you think. You're just telling on yourself and your "20 years" of supposed dev experience.

hombre_fatal 3 hours ago | parent [-]

Dragging UI components into a WYSIWYG editor is <1% of building an app.

Else Visual Basic and Dreamweaver would have killed software engineering in the 90s.

Also, I didn't make them. A clanker did. I can see this topic brings out the claws. Honestly I used to have the same reaction, and in a large way I still hate it.

bopbopbop7 3 hours ago | parent [-]

It's not bringing out claws, it's just causing certain developers to out themselves.

hombre_fatal 2 hours ago | parent [-]

Outs me as what, exactly?

I'm not sure you're interacting with a single claim I've made so far.

onethought 4 hours ago | parent | prev [-]

Love that you are disagreeing with the parent by saying you built software all on your own, and you only had 20 years of software experience.

Isn't that the point they are making?

hombre_fatal 4 hours ago | parent [-]

Maybe I didn't make it clear, but I didn't build the software in my comment. A clanker did.

Vibe-coding is a claude code <-> QA loop on the end result that anyone can do (the non-experts in his claim).

An example of a cycle looks like "now add an Options tab that lets me customize the global hotkey" where I'm only an end-user.

Once again, where do my 20 years of software experience come up in a process where I don't even read code?

thewebguyd 3 hours ago | parent | next [-]

> An example of a cycle looks like "now add an Options tab that lets me customize the global hotkey" where I'm only an end-user

Which is a prompt that someone with experience would write. Your average, non-technical person isn't going to prompt something like that, they are going to say "make it so I can change the settings" or something else super vague and struggle. We all know how difficult it is to define software requirements.

Just because an LLM wrote the actual code doesn't mean your prompts weren't more effective because of your experience and expertise in building software.

Sit someone down in front of an LLM with zero development or UI experience at all and they will get very different results. Chances are they won't even specify "macOS menu bar app" in the prompt and the LLM will end up trying to make them a webapp.

Your vibe coding experience just proves my initial point, that these tools are useful for those who already have experience and can lean on that to craft effective prompts. Someone non-technical isn't going to make effective use of an LLM to make software.

hombre_fatal 2 hours ago | parent | next [-]

Counter point: https://news.ycombinator.com/item?id=46234943

Your original claim:

> The hype is all about "this tech will enable non-experts to do things they couldn't do before"

Are you saying that a prompt like "make a macOS weather app for me" and "make an options menu that lets me set my location" are only something an expert can do?

I need to know what you think their expertise is in.

ModernMech 3 hours ago | parent | prev [-]

Here's how I look at it as a roboticist:

The LLM prompt space is an N-dimensional space where you can start at any point, and then the LLM carves a path through the space for so many tokens using the instructions you provided, until it stops and asks for another direction. This frames LLM prompt coding as a sort of navigation task.

The problem is difficult because at every decision point, there's an infinite number of things you could say that could lead to better or worse results in the future.

Think of a robot going down the sidewalk. It controls itself autonomously, but it stops at every intersection and asks "where to next boss?" You can tell it either to cross the street, or drive directly into traffic, or do any number of other things that could cause it to get closer to its destination, further away, or even to obliterate itself.

In the concrete world, it's easy to direct this robot, and to direct it such that it avoids bad outcomes, and to see that it's achieving good outcomes -- it's physically getting closer to the destination.

But when prompting in an abstract sense, it's hard to see where the robot is going unless you're an expert in that abstract field. As an expert, you know the right way to go is across the street. As a novice, you might tell the LLM to just drive into traffic, and it will happily oblige.

The other problem is feedback. When you direct the physical robot to drive into traffic, you witness its demise, its fate is catastrophic, and if you didn't realize it before, you'd see the danger then. The robot also becomes incapacitated, and it can't report falsely about its continued progress.

But in the abstract case, the LLM isn't obliterated, it continues to report on progress that isn't real, and as a non-expert, you can't tell it's been flattened into a pancake. The whole output chain is now completely and thoroughly off the rails, but you can't see the smoldering ruins of your navigation instructions because it's told you "Exactly, you're absolutely right!"

onethought 4 hours ago | parent | prev [-]

But "anyone" didn't do it... you, an expert in software development, did it.

I would hazard a guess that your knowledge lead to better prompts, better approach... heck even understanding how to build a status bar menu on Mac OS is slightly expert knowledge.

You are illustrating the GP's point, not negating it.

hombre_fatal 3 hours ago | parent [-]

> I would hazard a guess that your knowledge lead to better prompts, better approach... heck even understanding how to build a status bar menu on Mac OS is slightly expert knowledge.

You're imagining that I'm giving Claude technical advice, but that is the point I'm trying to make: I am not.

This is what "vibe-coding" tries to specify.

I am only giving Claude UX feedback from using the app it makes. "Add a dropdown that lets me change the girth".

Now, I do have a natural taste for UX as a software user, and through that I can drive Claude to make a pretty good app. But my software engineering skills are not utilized... except for that one time I told Claude to use an AGDT because I fancy them.

ModernMech 3 hours ago | parent [-]

My mother wouldn't be able to do what you did. She wouldn't even know where to start despite using LLMs all the time. Half of my CS students wouldn't know where to start either. None of my freshman would. My grad students can do this but not all of them.

Your 20 years is assisting you in ways you don't know; you're so experienced you don't know what it means to be inexperienced anymore. Now, it's true you probably don't need 20 years to do what you did, but you need some experience. It's not that the task you posed to the LLM is trivial for everyone due to the LLM, it's that it's trivial for you because you have 20 years of experience. For people with experience, the LLM makes moderate tasks trivial, hard tasks moderate, and impossible tasks technically doable.

For example, my MS students can vibe code a UI, but they can't vibe code a complete bytecode compiler. They can use AI to assist them, but it's not a trivial task at all, they will have to spend a lot of time on it, and if they don't have the background knowledge they will end up mired.

hombre_fatal 2 hours ago | parent [-]

The person at the top of the thread only made a claim about "non-experts".

Your mom wouldn't vibe-code software that she wants not because she's not a software engineer, but because she doesn't engage with software as a user at the level where she cares to do that.

Consider these two vibe-coded examples of waybar apps in r/omarchy where the OP admits he has zero software experience:

- Weather app: https://www.reddit.com/r/waybar/comments/1p6rv12/an_update_t...

- Activity monitor app: https://www.reddit.com/r/omarchy/comments/1p3hpfq/another_on...

That is a direct refutation of OP's claim. LLM enabled a non-expert to build something they couldn't before.

Unless you too think there exists a necessary expertise in coming up with these prompts:

- "I want a menubar app that shows me the current weather"

- "Now make it show weather in my current location"

- "Color the temperatures based on hot vs cold"

- "It's broken please find out why"

Is "menubar" too much expertise for you? I just asked claude "what is that bar at the top of my screen with all the icons" and it told me that it's macOS' menubar.

bopbopbop7 an hour ago | parent | next [-]

Your best examples of non-experts are two Linux power users?

ModernMech 20 minutes ago | parent | prev [-]

I didn't make clear I was responding to your question:

"Where do my 20 years of software dev experience fit into this except beyond imparting my aesthetic preferences?"

Anyway, I think you kind of unintentionally proved my point. These two examples are pretty trivial as far as software goes, and it enabled someone with a little technical experience to write them.

They work well because:

a) the full implementation for these apps doesn't even fill up the AI context window. It's easy to keep the LLM on task.

b) it's a tutorial-style app that people often write as "babby's first UI widget", so there are thousands of examples of exactly this kind of thing online; therefore the LLM has little trouble summoning the correct code in its entirety.

But still, someone with zero technical experience is going to be immediately thwarted by the prompts you provided.

Take the first one "I want a menubar app that shows me the current weather".

https://chatgpt.com/share/693b20ac-dcec-8001-8ca8-50c612b074...

ChatGPT response: "Nice — here's a ready-to-run macOS menubar app you can drop into Xcode..."

She's already out of her depth by word 11. You expect your mom to use Xcode? Mine certainly can't. Even I have trouble with Xcode and I use it for work. Almost every single word in that response would need to be explained to her, it might as well be a foreign language.

Now, the LLM could help explain it to her, and that's what's great about them. But by the time she knows enough to actually find the original response actionable, she would have gained... knowledge and experience enough to operate it just to the level of writing that particular weather app. Though having done that, it's still unreasonable to now believe she could then use the LLM to write a bytecode compiler, because other people who have a Ph.D. in CS can. The LLM doesn't level the playing field, it's still lopsided toward the Ph.D.s / senior devs with 20 years exp.

hombre_fatal 4 hours ago | parent | prev [-]

Go one level up:

    claude2() {
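      # inner call asks claude to generate a structured prompt (-p prints it non-interactively);
      # the outer call then runs that generated prompt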
      claude "$(claude "Generate a prompt and TODO list that works towards this goal: <goal>$*</goal>" -p)"
    }

    $ claude2 pls give ranked ideas for make code better
d-lisp 2 hours ago | parent | prev | next [-]

I remember a problem I had while quickly testing notcurses. I tried ChatGPT, which produced a lot of weird but kinda believable statements: that I had to include wchar and define a specific preprocessor macro, AND that I had to place the includes for notcurses, other includes and macros in a specific order.

My sentiment was "that's obviously a weird, non-intended hack", but I wanted to test quickly, and well ... it worked. Later, reading the man pages, I learned that I needed to pass specific flags to gcc in place of the GPT-advised solution.

I think these kinds of value-based judgements are hard for LLMs to emulate; it's hard for them to identify a single source as the most authoritative in a sea of less authoritative (but numerous) sources.

plufz 4 hours ago | parent | prev | next [-]

I think slash commands are great to help Claude with this. I have many, like /code:dry and /code:clean-code, each with a semi-long prompt and references to longer docs, to review code from a specific perspective. I think it at least improves Claude a bit in this area. Like processes or templates for thinking in broader ways. But yes, I agree it struggles a lot in this area.
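
For illustration only (the file path, the referenced doc, and the exact place Claude Code reads custom commands from are assumptions that vary by setup and version), such a command can be little more than a markdown file holding the long-form prompt:

    # .claude/commands/code/dry.md -- hypothetical path for a /code:dry command
    Review the code changed in this session for duplicated logic.
    For each repetition, either point to an existing helper we should reuse
    or propose a shared utility, following the conventions in docs/style/dry.md
    (hypothetical doc). Report findings first; do not rewrite anything yet.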

airstrike 4 hours ago | parent [-]

Somewhat tangential but interestingly I'd hate for Claude to make any changes with the intent of sticking to "DRY" or "Clean Code".

Neither of those are things I follow, and either way design is better informed by the specific problems that need to be solved rather than by such general, prescriptive principles.

plufz an hour ago | parent | next [-]

I agree, so obviously I direct it with more info and point it to code that I believe needs more of a specific principle. But generally I would like Claude to produce more DRY code; it is great at reimplementing the same thing in five places instead of making a shared utility module.

airstrike 15 minutes ago | parent [-]

I see, and I definitely agree with that last statement. It tends to rewrite stuff. I feel like it should pay me back 10,000 tokens each time it increases the API surface

SketchySeaBeast 3 hours ago | parent | prev [-]

I'm not sure how to interpret someone saying they don't follow DRY. Do you mean taking it to the zealous extreme, or do you abhor helper functions? Is this a "No True Scotsman" thing?

airstrike an hour ago | parent | next [-]

I just think DRY is overblown. I just let code grow. When parts of it become obvious candidates for abstraction, I refactor them into something self-contained. I learned this from an ice wizard.

When I was younger, writing Python rather than Rust, I used to go out of my way to make everything DRY, DRY, DRY everywhere from the outset. Class-based views in Django come to mind.

Today, I just write code, and after it's working I go back and clean things up where applicable. Not because I'm "following a principle", but because it's what makes sense in that specific instance.

Pannoniae 2 hours ago | parent | prev [-]

Not GP but I can strongly relate to it. Most of the programming I do is related to me making a game.

I follow WET principles (write everything twice, at least) because the abstraction penalty is huge, both in terms of performance and design: a bad abstraction causes all subsequent content to be made much more slowly. Which I can't afford as a small developer.

Same with most other "clean code" principles. My codebase is ~70K LoC right now, and I can keep most of it in my head. I used to try to make more functional, more isolated and encapsulated code, but it was hard to work with and most importantly, hard to modify. I replaced most of it with global variables, shit works so much better.

I do use partial classes pretty heavily though - helps LLMs not go batshit insane from context overload whenever they try to read "the entire file".

Models sometimes try to institute these clean code practices but it almost always just makes things worse.

SketchySeaBeast 2 hours ago | parent [-]

OK, I can follow "WET before you DRY"; to me that's just a non-zealous version of Don't Repeat Yourself.

I think, if you're writing code where you know the entire code base, a lot of the clean principles seem less important, but once you get someone who doesn't, and that can be you coming back to the project in three months, suddenly they have value.

cyral 4 hours ago | parent | prev | next [-]

Using the plan mode in cursor (or asking claude to first come up with a plan) makes it pretty good at generic "how can I improve" prompts. It can spend more effort exploring the codebase and thinking before implementing.

andai 2 hours ago | parent | prev | next [-]

The current paradigm is we sorta-kinda got AGI by putting dodgy AI in a loop:

until works { try again }

The stuff is getting so cheap and so fast... a sufficient increment in quantity can produce a phase change in quality.
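
A minimal sketch of that loop, assuming pytest as the test runner and the `claude` CLI from a sibling comment as the agent (both are stand-ins for whatever runner and agent you actually use):

    # "until works { try again }" as a blunt script: run tests, let an agent patch, repeat.
    import subprocess

    MAX_ATTEMPTS = 5

    for attempt in range(MAX_ATTEMPTS):
        tests = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if tests.returncode == 0:
            print(f"green after {attempt} fix attempt(s)")
            break
        # hand the failure output back to the agent and hope for a better diff
        subprocess.run([
            "claude", "-p",
            "The test suite is failing with:\n"
            f"{tests.stdout}\n{tests.stderr}\n"
            "Fix the code so the tests pass.",
        ])
    else:
        print("still failing; a human takes the keyboard back")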

giancarlostoro 4 hours ago | parent | prev | next [-]

> "Hey claude, I get this error message: <X>", and it'll often find the root cause quicker than I could.

This is true. As for "open-ended", I use Beads with Claude Code: I ask it to identify things based on criteria (even if it's open-ended), then I ask it to make tasks, then when it's done I ask it to research and ask clarifying questions for those tasks. This works really well.

ludicrousdispla 2 hours ago | parent | prev | next [-]

>> "Hey claude, I get this error message: <X>", and it'll often find the root cause quicker than I could.

Back in the day, we would just do this with a search engine.

cultofmetatron 3 hours ago | parent | prev | next [-]

> There's a significant blind-spot in current LLMs related to blue-sky thinking and creative problem solving.

That's called job security!

fudged71 4 hours ago | parent | prev | next [-]

This tells me that we need to build 1000 more linters of all kinds

xnorswap 4 hours ago | parent [-]

Unironically I agree.

One under-discussed lever that senior / principal engineers can pull is the ability to write linters & analyzers that will stop junior engineers (or LLMs) from doing something stupid that's specific to your domain.

Let's say you don't want people to make async calls while owning a particular global resource: it only takes a few minutes to write an analyzer that will prevent anyone from doing so.

Avoid hours of back-and-forth over code review by encoding your preferences and taste into your build pipeline and stopping it at the source.
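
A rough sketch of what such a rule can look like (assuming ESLint with the TypeScript parser; withGlobalResource() is a hypothetical helper standing in for however your codebase acquires the resource):

    // no-await-while-holding-resource.ts -- sketch of a custom ESLint rule
    import type { Rule } from "eslint";

    const rule: Rule.RuleModule = {
      meta: {
        type: "problem",
        docs: { description: "Disallow await while holding the global resource" },
      },
      create(context) {
        return {
          AwaitExpression(node) {
            // Flag the await if any enclosing call is withGlobalResource(...),
            // the (hypothetical) helper through which the resource is acquired.
            const holdsResource = context.getAncestors().some(
              (a) =>
                a.type === "CallExpression" &&
                a.callee.type === "Identifier" &&
                a.callee.name === "withGlobalResource"
            );
            if (holdsResource) {
              context.report({ node, message: "Do not await while holding the global resource." });
            }
          },
        };
      },
    };

    export default rule;

Drop something like that into a local plugin and the check runs on every build, same as any other lint rule.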

jmalicki 4 hours ago | parent [-]

And for more complex linters, I find that it can be easy to get the LLM to write most of the linter itself!!!

kccqzy 4 hours ago | parent | prev [-]

Not at all my experience. I’ve often tried things like telling Claude this SIMD code I wrote performed poorly and I needed some ideas to make it go faster. Claude usually does a good job rewriting the SIMD to use different and faster operations.

mainmailman 4 hours ago | parent | next [-]

I'm not a C++ programmer, but wouldn't your example be a fairly structured problem? You wanted to improve performance of a specific part of your code base.

zahlman 4 hours ago | parent | prev [-]

That sounds like a pretty "structured" problem to me.

chrneu 4 hours ago | parent | next [-]

That's one of the problems with AI: as it can accomplish more tasks, people will overestimate its ability.

What the person you replied to had Claude do is relatively simple and structured, but to that person, what Claude did is "automagic".

People already vastly overestimate AI's capabilities. This contributes to that.

kccqzy 3 hours ago | parent | prev [-]

Performance optimization isn’t structured at all. I find it amazing that without access to profilers or anything Claude is able to respond to “anything I can do to improve the speed” with acceptable results.

postalcoder 4 hours ago | parent | prev | next [-]

One of my favorite personal evals for llms is testing its stability as a reviewer.

The basic gist of it is to give the llm some code to review and have it assign a grade multiple times. How much variance is there in the grade?

Then, prompt the same llm to be a "critical" reviewer with the same code multiple times. How much does that average critical grade change?

A low variance of grades across many generations and a low delta between "review this code" and "review this code with a critical eye" is a major positive signal for quality.

I've found that gpt-5.1 produces remarkably stable evaluations whereas Claude is all over the place. Furthermore, Claude will completely [and comically] change the tenor of its evaluation when asked to be critical whereas gpt-5.1 is directionally the same while tightening the screws.

You could also interpret these results to be a proxy for obsequiousness.

Edit: One major part of the eval I left out is "can an llm converge on an 'A'?" Let's say the llm gives the code a 6/10 (or B-). When you implement its suggestions and then provide the improved code in a new context, does the grade go up? Furthermore, can it eventually give itself an A, and consistently?

It's honestly impressive how good, stable, and convergent gpt-5.1 is. Claude is not great. I have yet to test it on Gemini 3.
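
A stripped-down sketch of that harness (assuming the OpenAI Node SDK; the bare 1-10 grade format and the parsing are placeholders, not a claim about how any particular model replies):

    import OpenAI from "openai";

    const client = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

    // Grade the same code once; the system prompt forces a bare 1-10 number we can parse.
    async function gradeOnce(code: string, critical: boolean): Promise<number> {
      const ask = critical ? "Review this code with a critical eye." : "Review this code.";
      const res = await client.chat.completions.create({
        model: "gpt-5.1", // placeholder; swap in whatever model you're evaluating
        messages: [
          { role: "system", content: "You are a code reviewer. Reply with only a grade from 1 to 10." },
          { role: "user", content: `${ask}\n\n${code}` },
        ],
      });
      return Number(res.choices[0].message.content?.trim());
    }

    // Run n independent gradings in both modes; low variance plus a small
    // plain-vs-critical gap is the stability signal described above.
    async function stability(code: string, n = 10) {
      const plain = await Promise.all(Array.from({ length: n }, () => gradeOnce(code, false)));
      const critical = await Promise.all(Array.from({ length: n }, () => gradeOnce(code, true)));
      const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
      const variance = (xs: number[]) => mean(xs.map((x) => (x - mean(xs)) ** 2));
      console.log({
        plainMean: mean(plain),
        plainVariance: variance(plain),
        criticalMean: mean(critical),
        criticalDelta: mean(critical) - mean(plain),
      });
    }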

lemming 2 hours ago | parent | next [-]

I agree, I mostly use Claude for writing code, but I always get GPT5 to review it. Like you, I find it astonishingly consistent and useful, especially compared to Claude. I like to reset my context frequently, so I’ll often paste the problems from GPT into Claude, then get it to review those fixes (going around that loop a few times), then reset the context and get it to do a new full review. It’s very reassuring how consistent the results are.

adastra22 3 hours ago | parent | prev | next [-]

You mean literally assign a grade, like B+? This is unlikely to work based on how token prediction & temperature works. You're going to get a probability distribution in the end that is reflective of the model runtime parameters, not the intelligence of the model.

OsrsNeedsf2P 3 hours ago | parent | prev | next [-]

How is this different than testing the temperature?

smt88 2 hours ago | parent [-]

It isn't, and it reflects how deeply LLMs are misunderstood, even by technical people

guluarte 4 hours ago | parent | prev [-]

My experience with PR review is that sometimes it says a PR is perfect with some nitpicks, and other times it says the same PR is trash and needs a lot of work.

elzbardico 4 hours ago | parent | prev | next [-]

LLMs have this strong bias towards generating code, because writing code is the default behavior from pre-training.

Removing code, renaming files, condensing, and other edits are mostly post-training stuff, supervised-learning behavior. You have armies of developers across the world making 17 to 35 dollars an hour solving tasks step by step, which are then basically used to generate prompt/response pairs of desired behavior for a lot of common development situations, adding desired output for things like tool calling, which is needed for things like deleting code.

A typical human task in post-training dataset generation would involve a scenario like: given this Dockerfile for a Python application, when we try to run pytest it fails with exception foo not found. The human will notice that package foo is not installed, change the requirements.txt file and write this down, then try pip install, and notice that the foo package requires a certain native library to be installed. The final output of this will be a response with the appropriate tool calls in a structured format.

Given that the amount of unsupervised learning is way bigger than the amount spent on fine-tuning for most models, it is no surprise that, given any ambiguous situation, the model will default to what it knows best.

More post-training will usually improve this, but the quality of the human generated dataset probably will be the upper bound of the output quality, not to mention the risk of overfitting if the foundation model labs embrace SFT too enthusiastically.

hackernewds 2 hours ago | parent [-]

> Writing code is the default behavior from pre-training

what does this even mean? could you expand on it

bongodongobob an hour ago | parent [-]

He means that it is heavily biased to write code, not remove, condense, refactor, etc. It wants to generate more stuff, not less.

f311a 4 hours ago | parent | prev | next [-]

I like to ask LLMs to find problems or improvements in 1-2 files. They are pretty good at finding bugs, but for general code improvements, 50-60% of the edits are trash. They add completely unnecessary stuff. If you ask them to improve pretty well-written code, they rarely say it's good enough already.

For example, in a functional-style codebase, they will try to rewrite everything to a class. I have to adjust the prompt to list things that I'm not interested in. And some inexperienced people are trying to write better code by learning from LLM changes like these...

pawelduda 4 hours ago | parent | next [-]

If you just ask it to find problems, it will do its best to find them - like running a while loop with no exit condition. That's why I put a breaker in the prompt, which in this case would be "don't make any improvements if the positive impact is marginal". I've mostly seen it do nothing and just summarize why, followed by some suggestions in case I still want to force the issue.

f311a 4 hours ago | parent [-]

I guess "marginal impact" for them is a pretty random metric, which will be different on each run. Will try it next time.

Another problem is that they try to add handling of different cases that are never present in my data. I have to mention that there is no need to update handling to be more generalized. For example, my code handles PNG files, and they add JPG handling that never happens.

ryandrake 3 hours ago | parent | prev [-]

I asked Claude the other day to look at one of my hobby projects that has a client/server architecture and a bespoke network protocol, and brainstorm ideas for converting it over to HTTP, JSON-RPC, or something else standards-based. I specifically told it to "go wild" and really explore the space. It thought for a while and provided a decent number of suggestions (several I was unaware of) with "verdicts". Ultimately, though, it concluded that none of them were ideal, and that the custom wire protocol was fine and appropriate for the project. I was kind of shocked at this conclusion: I expected it to behave like that eager intern persona we all have come to expect--ready to rip up the code and "do things."

kderbyma 2 days ago | parent | prev | next [-]

Yeah. I noticed Claude suffers when it reaches context overload - it's too opinionated, so it shortens its own context with decisions I would not ever make, yet I see it telling itself that the shortcuts are a good idea because the project is complex... then it gets into a loop where it second-guesses its own decisions, forgets the context, and continues to spiral uncontrollably into deeper and deeper failures - often missing the obvious glitch and instead looking into imaginary land for answers - constantly diverting the solution from patching to completely rewriting...

I think it suffers from performance anxiety...

----

The only solution I have found is to rewrite the prompt from scratch, change the context myself, clear any "history or memories", and then try again.

I have even gone so far as to open nested folders in separate windows to "lock in" scope better.

As soon as I see the agent say "Wait, that doesn't make sense, let me review the code again", it's cooked.

embedding-shape 4 hours ago | parent | next [-]

> Yeah. I noticed Claud suffers when it reaches context overload

All LLMs degrade in quality as soon as you go beyond one user message and one assistant response. If you're looking for accuracy and highest possible quality, you need to constantly redo the conversations from scratch, never go beyond one user message.

If the LLM gets it wrong in its first response, instead of saying "No, what I meant was...", you need to edit your first message and re-generate; otherwise the conversation becomes "poisoned" almost immediately, and every token generated after that will suffer.

torginus 3 hours ago | parent [-]

Yeah, I used to write some fiction for myself with LLMs as a recreational pastime; it's funny to see how, as the story gets longer, LLMs progressively either get dumber, start repeating themselves, or become unhinged.

rtp4me 4 hours ago | parent | prev | next [-]

For me, too many compactions throughout the day eventually lead to a decline in Claude's thinking ability. And, during that time, I have given it so much context to help drive the coding interaction. Thus, restarting Claude requires me to remember the small bits of "nuggets" we discovered during the last session so I find myself repeating the same things every day (my server IP is: xxx, my client IP is: yyy, the code should live in directory: a/b/c). Using the resume feature with Claude simply brings back the same decline in thinking that led me to stop it in the first place. I am sure there is a better way to remember these nuggets between sessions but I have not found it yet.

mingus88 4 hours ago | parent [-]

Shouldn't you put those things you keep repeating into CLAUDE.md?

rtp4me 4 hours ago | parent [-]

Perhaps, but I already have a CLAUDE.md file for the general coding session. Unique items I stumble upon each day probably should go into another file that can be dynamically updated. Maybe I should create a /slash command for this?

Edit: Shortly after posting this, I asked Claude the same type of question (namely how to persist pieces of data between each coding session). I just learned about Claude's Memory System - the ability to store these pieces of data between coding sessions. I learn something new every day!

someguyiguess 5 hours ago | parent | prev | next [-]

There’s definitely a certain point I reach when using Claude code where I have to make the specifications so specific that it becomes more work than just writing the code myself

snarf21 4 hours ago | parent | prev | next [-]

That has been my greatest stumbling block with these AI agents: context. I was trying to have one help vibe code a puzzle game, and most of the time when I added a new rule it broke 5 existing rules. It also never approached the rules engine with the goal of building a reusable abstraction; just Hammer, meet Nail.

SV_BubbleTime 5 hours ago | parent | prev | next [-]

I’m keeping Claude’s tasks small and focused, then if I can I clear between.

It’s REAL FUCKING TEMPTING to say ”hey Claude, go do this thing that would take me hours and you seconds”, because he happily will, and it’ll kinda work. But one way or another you are going to put those hours in.

It’s like programming… is proof of work.

thevillagechief 5 hours ago | parent [-]

Yes, this is exactly true. You will put in those hours.

whatshisface 4 hours ago | parent [-]

In this vein, one of the biggest time-savers has turned out to be its ability to make me realize I don't want to do something.

SV_BubbleTime an hour ago | parent [-]

I get that. But I think the AI-deriders are a bit nuts sometimes because while I’m not running around crying about AGI… it’s really damn nice to change the arguments of a function and have it just go everywhere and adjust every invocation of that function to work properly. Something that might take me 10-30 minutes is now seconds and it’s not outside of its reliability spectrum.

Vibe coding though, super deceptive!

flowerthoughts 4 hours ago | parent | prev [-]

There's no -c on the command line, so I'm guessing this is starting fresh every iteration, unless claude(1) has changed the default lately.

iambateman 4 hours ago | parent | prev | next [-]

The point he’s making - that LLMs aren’t ready for broadly unsupervised software development - is well made.

It still requires an exhausting amount of thought and energy to make the LLM go in the direction I want, which is to say in a direction which considers the code which is outside the current context window.

I suspect that we will not solve the context window problem for a long time. But we will see a tremendous growth in “on demand tooling” for things which do fit into a context window and for which we can let the AI “do whatever it wants.”

For me, my work product needs to conform to existing design standards and I can’t figure out how to get Claude to not just wire up its own button styles.

But it’s remarkable how—despite all of the nonsense—these tools remain an irreplaceable part of my work life.

spaceywilly 3 hours ago | parent | next [-]

I feel like I’ve figured out a good workflow with AI coding tools now. I use it in “Planning mode” to describe the feature or whatever I am working on and break it down into phases. I iterate on the planning doc until it matches what I want to build.

Then, I ask it to execute each phase from the doc one at a time. I review all the code it writes or sometimes just write it myself. When it is done it updates the plan with what was accomplished and what needs to be done next.

This has worked for me because:

- it forces the planning part to happen before coding. A lot of Claude’s “wtf” moments can be caught in this phase before it writes a ton of gobbledygook code that I then have to clean up

- the code is written in small chunks, usually one or two functions at a time. It’s small enough that I can review all the code and understand before I click accept. There’s no blindly accepting junk code.

- the only context is the planning doc. Claude captures everything it needs there, and it’s able to pick right up from a new chat and keep working.

- it helps my distraction-prone brain make plans and keep track of what I was doing. Even without Claude writing any code, this alone is a huge productivity boost for me. It’s like having a magic notebook that keeps track of where I was in my projects so I can pick them up again easily.

torginus 3 hours ago | parent | prev [-]

Which is why I think agentic software development is not really worth it today. It can solve well-defined problems and work through issues by rote, but if you give it some task and have it work on it for a couple of hours, you then have to come in and fix it up.

I think LLMs are still at the 'advanced autocomplete' stage, where the most productive way to use them is to have a human in the loop.

In this mode, accuracy in following instructions and a short feedback time are much more important than semi-decent behavior over long-horizon tasks.

thomassmith65 26 minutes ago | parent | prev | next [-]

With a good programmer, if they do multiple passes of a refactor, each pass makes the code more elegant, and the next pass easier to understand and further improve.

Claude has a bias to add lines of code to a project, rather than make it more concise. Consequently, each refactoring pass becomes more difficult to untangle, and harder to improve.

Ideally, in this experiment, only the first few passes would result in changes - mostly shrinking the project size, and from then on, Claude would change nothing - just like a very good programmer.

This is the biggest problem with developing with Claude, by far. Anthropic should laser focus on fixing it.

mbesto 4 hours ago | parent | prev | next [-]

While there are justifiable comments here about how LLMs behave, I want to point out something else:

There is no consensus on what constitutes a high quality codebase.

Said differently - even if you asked 200 humans to do this same exercise, you would get 200 different outputs.

samuelknight 3 hours ago | parent | prev | next [-]

This is an interesting experiment that we can summarize as "I gave a smart model a bad objective", with the key result at the end

"...oh and the app still works, there's no new features, and just a few new bugs."

Nobody thinks that doing 200 improvement passes on a functioning code base is a good idea. The prompt tells the model that it is a principal engineer, then contradicts that role with the imperative "We need to improve the quality of this codebase". Determining when code needs to be improved is a responsibility of the principal engineer, but the prompt doesn't tell the model that it can decide the code is good enough. I think we would see different behavior if the prompt were changed to "Inspect the codebase, determine if we can do anything to improve code quality, then immediately implement it." If the model is smart enough, this will increasingly result in passes where the agent decides there is nothing left to do.

In my experience with CC I get great results where I make an open ended question about a large module and instruct it to come back to me with suggestions. Claude generates 5-10 suggestions and ranks them by impact. It's very low-effort from the developer's perspective and it can generate some good ideas.

jedberg 3 hours ago | parent | prev | next [-]

You know how when you hear how many engineers are working on a product, you think to yourself, "but I could do that with like three people!"? Now you know why they have so many people. Because they did this with their codebase, but with humans.

Or I should say, they kept hiring the humans who needed something to do, and basically did what this AI did.

hazmazlaz 4 hours ago | parent | prev | next [-]

Well of course it produced bad results... it was given a bad prompt. Imagine how things would have turned out if you had given the same instructions to a skilled but naive contractor who contractually couldn't say no and couldn't question you. Probably pretty similar.

mainmailman 4 hours ago | parent [-]

Yeah I don't see the utility in doing this hundreds of times back to back. A few iterations can tell us some things about how Claude optimizes code, but an open ended prompt to endlessly "improve" the code sounds like a bad boss making huge demands. I don't blame the AI for adding BS down the line.

m101 5 hours ago | parent | prev | next [-]

This is a great example of there being no intelligence under the hood.

xixixao 4 hours ago | parent | next [-]

Would a human perform very differently? A human who must obey orders (like maybe they are paid to follow the prompt). With some "magnitude of work" enforced at each step.

I'm not sure there's much to learn here, besides it's kinda fun, since no real human was forced to suffer through this exercise on the implementor side.

wongarsu 4 hours ago | parent | next [-]

> A human who must obey orders (like maybe they are paid to follow the prompt). With some "magnitude of work" enforced at each step

Which describes a lot of outsourced development. And we all know how well that works

nosianu 4 hours ago | parent | prev | next [-]

> Would a human perform very differently?

How useful is the comparison with the worst human results? Which are often due to process rather than the people involved.

You can improve processes and teach the humans. The junior will become a senior, in time. If the processes and the company are bad, what's the point of using such a context to compare human and AI outputs? The context is too random and unpredictable. Even if you find out AI or some humans are better in such a bad context, what of it? The priority would be to improve the process first for best gains.

Capricorn2481 4 hours ago | parent | prev | next [-]

> Would a human perform very differently?

Yes.

ebonnafoux 4 hours ago | parent | prev | next [-]

I have seen codebases double their number of LoC after "refactoring" done by humans, so I would say no.

thatwasunusual 4 hours ago | parent | prev [-]

No (human) developer would _add_ tests. ^/s

Terretta 4 hours ago | parent | prev | next [-]

Just as enterprise software is proof positive of no intelligence under the hood.

I don't mean the code producers, I mean the enterprise itself is not intelligent yet it (the enterprise) is described as developing the software. And it behaves exactly like this, right down to deeply enjoying inflicting bad development/software metrics (aka BD/SM) on itself, inevitably resulting in:

https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

SV_BubbleTime 5 hours ago | parent | prev [-]

Well… it’s more a great example that great output comes from a good model with the right context at the right time.

Take away everything else: there’s a product that is really good at small tasks; it doesn’t mean that chaining those small tasks together to make a big task should work.

dcchuck 4 hours ago | parent | prev | next [-]

I spent some time last night "over iterating" on a plan to do some refactoring in a large codebase.

I created the original plan with a very specific ask - create an abstraction to remove some tight coupling. Small problem that had a big surface area. The planning/brainstorming was great and I like the plan we came up with.

I then tried to use a prompt like OP's to improve it (as I said, large surface area so I wanted to review it) - "Please review PLAN_DOC.md - is it a comprehensive plan for this project?". I'd run it -> get feedback -> give it back to Claude to improve the plan.

I (naively perhaps) expected this process to converge to a "perfect plan". At this point I think of it more like a probability tree where there's a chance of improving the plan, but a non-zero chance of getting off the rails. And once you go off the rails, you only veer further and further from the truth.

There are certainly problems where "throwing compute" at it and continuing to iterate with an LLM will work great. I would expect those to have firm success criteria. Providing definitions of quality would significantly improve the output here as well (or decrease the probability of going off the rails, I suppose). Otherwise Claude will get confused about what quality means, as we see here.

Shout out to OP for sharing their work and moving us forward.

Gricha 3 hours ago | parent | next [-]

I think I end up doing that with plans inadvertently too. Oftentimes I'll iterate on a plan too many times, and only recognize that it's too far gone and needs a restart with more direction after sinking 15 minutes into it.

elzbardico 4 hours ago | parent | prev [-]

Small errors compound over time.

blobbers an hour ago | parent | prev | next [-]

I'm curious if anyone has written a "Principal Engineer" agents.md or CLAUDE.md style file that yields better results than the 'junior dev' results people are seeing here.

I've worked on writing some as a data scientist, and I have gotten the basic Claude output to be much better; it makes some saner decisions, it validates and circles back to fix fits, etc.

ttul 2 hours ago | parent | prev | next [-]

Have you tried writing into the AGENTS.md something like, "Always be on the lookout for dead code, copy-pasta, and other opportunities to optimize and trim the codebase in a sensible way"?

In my experience, adding this kind of instruction to the context window causes SOTA coding models to actually undertake that kind of optimization while development carries on. You can also periodically chuck your entire codebase into Gemini-3 (with its massive context window) and ask it to write a refactoring plan; then, pass that refactoring plan back into your day-to-day coding environment such as Cursor or Codex and get it to take a few turns working away at the plan.

As with human coders, if you let them run wild "improving" things without specifically instructing them to also pay attention to bloat, bloat is precisely what you will get.

torginus 4 hours ago | parent | prev | next [-]

I've heard a very apt criticism of the current batch of LLMs:

LLMs are incapable of reducing entropy in a code base

I've always had this nagging feeling, but I think this really captures the essence of it succinctly.

maddmann 4 hours ago | parent | prev | next [-]

lol 5000 tests. Agentic code tools have a significant bias to add versus remove/condense. This leads to a lot of bloat and orphaned code. Definitely something that still needs to be solved for by agentic tools.

nosianu 4 hours ago | parent | next [-]

> Agentic code tools have a significant bias to add versus remove/condense.

Your point stands uncontested by me, but I just wanted to mention that humans have that bias too.

Random link (has the Nature study link): https://blog.benchsci.com/this-newly-proven-human-bias-cause...

https://en.wikipedia.org/wiki/Additive_bias

maddmann an hour ago | parent [-]

Great point, interesting how agents somehow pick up the same bias.

oofbey 4 hours ago | parent | prev [-]

Oh I’ve had agents remove tests plenty of times. Or cripple the tests so they pass but are useless - more common and harder to prompt against.

maddmann an hour ago | parent [-]

Ah true, that also can happen — in aggregate I think models will tend to expand codebases rather than contract them. Though this is anecdotal, and probably something AI labs and coding agent companies are looking at now.

minimaxir 3 hours ago | parent | prev | next [-]

About a year ago I wrote a blog post (HN discussion: https://news.ycombinator.com/item?id=42584400) experimenting with whether asking Claude to "write code better" repeatedly would indeed cause it to write better code, with "better" measured by speed, since better code implies more efficient algorithms. I found that it did indeed work (at n=5 iterations), and that additionally providing an explicit system prompt improved it further.

Given with what I've seen from Claude 4.5 Opus, I suspect the following test would be interesting: attempt to have Claude Code + Haiku/Sonnet/Opus implement and benchmark an algorithm with:

- no CLAUDE.md file

- a basic CLAUDE.md file

- an overly nuanced CLAUDE.md file

And then measure both the algorithm's speed and the number of turns it takes to hit that speed.

bulletsvshumans 4 hours ago | parent | prev | next [-]

I think the prompt is a major source of the issue. "We need to improve the quality of this codebase" implicitly indicates that there is something wrong with the codebase. I would be curious to see if it would reach a point of convergence with a prompt that allowed for it. Something like "Improve the quality of this codebase, or tell me that it is already in an optimal state."

written-beyond 3 days ago | parent | prev | next [-]

> I like Rust's result-handling system, I don't think it works very well if you try to bring it to the entire ecosystem that already is standardized on error throwing.

I disagree; it's very useful even in languages that have exception-throwing conventions. It's good enough to be the return type of the Promise.allSettled API.

The problem is that when I don't have the result type, I end up approximating it anyway through other means. For a quick project I'd stick with exceptions, but depending on my codebase I usually use the Go-style (ok, err) tuple (it's usually clunkier in TS though) or a Rust-style Result ok/err enum.

turboponyy 4 hours ago | parent [-]

I have the same disagreement. TypeScript with its structural and pseudo-dependent typing, somewhat-functionally disposed language primitives (e.g. first-class functions as values, currying) and standard library interfaces (filter, reduce, flatMap et al), and ecosystem make propagating information using values extremely ergonomic.
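
A minimal sketch of what I mean by propagating information as values - plain TypeScript, no library, and Effect gives you a much richer version of the same idea:

    // A Rust-flavored Result as a plain discriminated union.
    type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

    const ok = <T>(value: T): Result<T, never> => ({ ok: true, value });
    const err = <E>(error: E): Result<never, E> => ({ ok: false, error });

    // Wrap a throwing API once, at the boundary, so the rest of the code never throws.
    function parseJson(text: string): Result<unknown, SyntaxError> {
      try {
        return ok(JSON.parse(text));
      } catch (e) {
        return err(e as SyntaxError);
      }
    }

    // Callers are forced to handle both branches, and the compiler narrows the type for them.
    const parsed = parseJson('{"a": 1}');
    if (parsed.ok) {
      console.log("value:", parsed.value);
    } else {
      console.log("bad JSON:", parsed.error.message);
    }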

Embracing a functional style in TypeScript is probably the most productive I've felt in any mainstream programming language. It's a shame that the language was defiled with try/catch, classes and other unnecessary cruft so third party libraries are still an annoying boundary you have to worry about, but oh well.

The language is so well-suited for this that you can even model side effects as values, do away with try/catch, if/else and mutation a la Haskell, if you want[1].

[1] https://effect.website/

barbazoo 2 hours ago | parent | prev | next [-]

> I can sort of respect that the dependency list is pretty small, but at the cost of very unmaintainable 20k+ lines of utilities. I guess it really wanted to avoid supply-chain attacks.

> Some of them are really unnecessary and could be replaced with off the shelf solution

Lots of people would regard this as a good thing. Surely the LLM can't guess which kind you are.

tracker1 3 hours ago | parent | prev | next [-]

On the Result<TR, TE> responses... I've seen this a few times. I think it works well in Rust or other languages that don't have the ability to "throw" baked in. However, when you bolt it on to a language that can implicitly throw, you're now doing twice the work, since you have to handle both the explicit error result and the built-in errors.

I worked in a C# codebase with Result responses all over the place, and it just really complicated every use case all around. Combined with Promises (TS) it's worse still.

mrsmrtss 2 hours ago | parent [-]

The Result pattern also works exceptionally well with C#, provided you ensure that code returning a Result object never throws an exception. Of course, there are still some exceptional things that can throw, but this is essentially the same situation as dealing with Rust panics.

Bombthecat an hour ago | parent | prev | next [-]

Story of AI:

For instance - it created a hasMinimalEntropy function meant to "detect obviously fake keys with low character variety". I don't know why.

bikeshaving 4 hours ago | parent | prev | next [-]

https://github.com/Gricha/macro-photo/blob/highest-quality/l...

The logger library which Claude created is actually pretty simple, highly approachable code, with utilities for logging the timings of async code and the ability to emit automatic performance warnings.

I have been using LogTape (https://logtape.org) for JavaScript logging, and the inherited, category-focused logging with different sinks has been pretty great.

surprisetalk 4 hours ago | parent | prev | next [-]

This reflects my experience with human programmers. So many devs are taught to add layers of complexity in pursuit of "best practices". I think the LLM was trained to behave this way.

In my experience, Claude can actually clean up a repo rather nicely if you ask it to (1) shrink source code size (LOC or total bytes), (2) reduce dependencies, and (3) maintain integration tests.

Hammershaft 4 hours ago | parent | prev | next [-]

Impressive that the app still works! Did not expect that.

elzbardico 4 hours ago | parent [-]

Probably being a very simple application and starting with an already big testing suite helped.

maerF0x0 3 hours ago | parent | prev | next [-]

I would love to see someone do a longitudinal study of the incident/error rate of a canary container in prod that is managed by Claude. Basically a control/experimental group to see who does better, the humans or the AI.

Havoc 3 hours ago | parent | prev | next [-]

My current fav improvement strategy is

1) Run multiple code analysis tools over it and have the LLM aggregate the results with suggestions

2) Ask the LLM an open-ended question to list potential improvements, and pick by hand which ones I want

And usually I repeat the process with a completely different model (i.e. one trained by a different company)

Any more and yeah they end up going in circles

WhitneyLand 4 hours ago | parent | prev | next [-]

It can be difficult to explain to management why, in certain scenarios, AI can seem to work coding miracles, but this still doesn't mean it's always going to speed up development 10x, especially for an established code base.

Tangible examples like this seem like a useful way to show some of the limitations.

fauigerzigerk 3 hours ago | parent | prev | next [-]

What would happen if you gave the same task to 200 human contractors?

I suspect SLOC growth wouldn't be quite as dramatic but things like converting everything to Rust's error handling approach could easily happen.

websiteapi 5 hours ago | parent | prev | next [-]

You gotta be strategic about it. For example, for tests, tell it to use equivalence testing and to prove it, e.g. create a graph of permutations of arguments and their equivalences from the underlying code, and then use that to generate the tests.
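
A toy sketch of the idea (clamp stands in for the reference implementation and clampFast for the rewrite; the point is that the permutations plus the reference define correctness, not the agent's say-so):

    import assert from "node:assert/strict";

    // Stand-ins: the original implementation and the "improved" one the agent produced.
    const clamp = (x: number, lo: number, hi: number) => Math.min(Math.max(x, lo), hi);
    const clampFast = (x: number, lo: number, hi: number) => (x < lo ? lo : x > hi ? hi : x);

    // Enumerate permutations of interesting arguments and assert the two stay equivalent.
    const xs = [-1e9, -1, -0.5, 0, 0.5, 1, 1e9, Number.NaN];
    const bounds = [-10, -1, 0, 1, 10];
    let checked = 0;
    for (const x of xs) {
      for (const lo of bounds) {
        for (const hi of bounds) {
          if (lo > hi) continue; // only well-formed ranges are in scope here
          assert.ok(
            Object.is(clampFast(x, lo, hi), clamp(x, lo, hi)),
            `diverged at (${x}, ${lo}, ${hi})`
          );
          checked++;
        }
      }
    }
    console.log(`equivalence held over ${checked} argument permutations`);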

Telling it to do better without any feedback is obviously going to go nowhere fast.

elzbardico 4 hours ago | parent | prev | next [-]

Funniest part:

> ..oh and the app still works, there's no new features, and just a few new bugs.

orliesaurus 3 hours ago | parent | prev | next [-]

OK, serious question: what's the best "Code Review" Skill/Agent/Prompt that I can use these days? Curious to see even paid options if anyone knows.

keepamovin 3 hours ago | parent | prev | next [-]

This is actually a great idea. It's like those "an AI resampled this image 10,000 times" or "this JPEG was iteratively compressed 1 million times" experiments.

gm678 4 hours ago | parent | prev | next [-]

"Core Functional Utilities: Identity function - returns its input unchanged." is one of my favorites from `lib/functional.ts`.

phildougherty 4 hours ago | parent | prev | next [-]

Pasting this whole article into Claude Code: "improve my codebase taking this article into account"

minimaxir 3 hours ago | parent [-]

You can just give Claude Code/any modern Agent a URL and it'll retrieve it.

VikingCoder 3 hours ago | parent | prev | next [-]

You need to scroll the windows to see all the numbers. (Why??)

g947o 3 hours ago | parent | prev | next [-]

When I ask coding agents to add tests, they often come up with something like this:

    const x = new NewClass();
    assert.ok(x instanceof NewClass);

So I am not at all surprised about Claude adding 5x tests, most of which are useless.

It's going to be fun to look back at this and see how much slop these coding agents created.
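
For contrast, the kind of test I'd actually want to see (slugify and its expected behavior are hypothetical here, using node:test):

    import test from "node:test";
    import assert from "node:assert/strict";
    import { slugify } from "./slugify"; // hypothetical function under test

    test("slugify lowercases, collapses whitespace, and strips punctuation", () => {
      // These expectations are the spec, not a restatement of the implementation.
      assert.equal(slugify("  Hello,   World! "), "hello-world");
      assert.equal(slugify("Already-Slugged"), "already-slugged");
      assert.equal(slugify(""), "");
    });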

thald 3 hours ago | parent | prev | next [-]

Interesting experiment. Looking at this I immediately thought of a similar experiment run by Google: AlphaEvolve. Throwing LLM compute at problems might work if the problem is well defined and the result can be objectively measured.

As for this experiment: what does quality even mean? Most human devs will have different opinions on it. If you asked 200 different devs (Claude starts from 0 after each iteration) to do the same, I have doubts the code would look much better.

I am also wondering what would happen if Claude had the option to just walk away from the code if it's "good enough". For each problem, most human devs run a cost->benefit equation in their head; only worthy ideas are realized. Claude does not do that: the cost of writing code is very low on its side, and the prompt does not allow any graceful exit :)

simonw 4 hours ago | parent | prev | next [-]

The prompt was:

  Ultrathink. You're a principal engineer. Do not ask me any
  questions. We need to improve the quality of this codebase.
  Implement improvements to codebase quality.

I'm a little disappointed that Claude didn't eventually decide to start removing all of the cruft it had added to improve the quality that way instead.
Gricha 3 hours ago | parent [-]

Yeah, the best it did on some iterations was claim that the codebase was already in a good state and produce no changes - but that was 1 in many.

pawelduda 4 hours ago | parent | prev | next [-]

Did it create 200 CODE_QUALITY_IMPROVEMENTS.md files by chance?

GuB-42 3 hours ago | parent | prev | next [-]

It is something I noticed when talking to LLMs: if they don't get it right the first time, they probably never will, and if you really insist, the quality starts to degrade.

It is not unlike people, the difference being that if you ask someone the same thing 200 times, he will probably tell you to go fuck yourself, or, if unable to, turn to malicious compliance. These AIs will always be diligent. Or, a human may use the opportunity to educate himself, but again, LLMs don't learn by doing; they have a distinct training phase that involves ingesting pretty much everything humanity has produced, and your little conversation will not have a significant effect, if any.

grvdrm 3 hours ago | parent [-]

I use a new chat/etc. every time that happens and try to improve my prompt to get a better result. It sometimes works, and that multiple-chat approach annoys me less than one laborious long chat.

6LLvveMx2koXfwn 4 hours ago | parent | prev | next [-]

for all the bad code havoc was most certainly not 'wrecked', it may have been 'wreaked' though . . .

mvanbaak 3 hours ago | parent | prev | next [-]

`--dangerously-skip-permissions` why?

minimaxir 3 hours ago | parent [-]

It's necessary to allow Claude Code to be fully autonomous, otherwise it will stop and ask you to run commands.

mvanbaak 3 hours ago | parent [-]

And just letting it do whatever it thinks it should do, without a human intervening, is a good plan?

ssl-3 3 hours ago | parent | next [-]

Depending on the breadth (and value) of the sandbox: Sure? Why not?

To extend what may seem like a [prima facie] insane, stupid, or foolhardy idea: Why not send the output of /dev/urandom into /bin/bash? Or even /proc/mem? It probably won't do anything particularly interesting. It will probably just break things and burn power.

And so? It's just a computer; its scope is limited.

minimaxir 3 hours ago | parent | prev | next [-]

Discovering that is the entire intent of this experiment, yes.

mvanbaak 3 hours ago | parent [-]

Fair point. I will re-read the whole thing. I'm sorry for my ignorance.

news_hacker 3 hours ago | parent | prev [-]

the "best practice" suggestion would be to do this in a sandboxed container

jesse__ 3 hours ago | parent | prev | next [-]

> This app is around 4-5 screens. The version "pre improving quality" was already pretty large. We are talking around 20k lines of TS

Fucking yikes dude. When's the last time it took you 4500 lines per screen, 9000 including the JSON data in the repo????? This is already absolute insanity.

I bet I could do this entire app in easily less than half, probably less than a tenth, of that.

SKILNER 4 hours ago | parent | prev | next [-]

This strikes me as a very solid methodology for improving the results of all AI coding tools. I hope Anthropic, etc take this up.

Rather than converging on optimal code (Occam's Razor for both maintainability and performance) they are just spewing code all over the scene. I've noticed that myself, of course, but this technique helps to magnify and highlight the problem areas.

It makes you wonder how much training material was/is available for code optimization relative to training material for just coding to meet functional requirements. And therefore, what's the relative weight of optimizing code baked into the LLMs.

etamponi 4 hours ago | parent | prev | next [-]

Am I the only one that is surprised that the app still works?!

stavros 4 hours ago | parent | prev | next [-]

Well, given it can't say "no, I think it's good enough now", you'll just get madness, no?

minimaxir 3 hours ago | parent [-]

That's the point. Sometimes madness is interesting.

smallpipe 2 hours ago | parent | prev | next [-]

The viewport of this website is quite infuriating. I have to scroll horizontally to see the `cloc` output, but there's 3x the empty space on either side.

lubesGordi 2 hours ago | parent | prev | next [-]

So now you know. You can get Claude to write you a ton of unit tests and also improve your static typing situation. Now you can restrict your prompt!

jcalvinowens 2 hours ago | parent | prev | next [-]

This really mirrors my experience trying to get LLMs to clean up kernel driver code, they seem utterly incapable of simplifying things.

nadis 2 hours ago | parent | prev | next [-]

20K --> 84K lines of TS for a simple app is bananas. Much madness indeed! But also super interesting, thanks for sharing the experiment.

guluarte 4 hours ago | parent | prev | next [-]

That's my experience with AI: most times it creates an overengineered solution unless told to keep it simple.

krupan 4 hours ago | parent | prev [-]

Just the headline sounds like a YouTube brain rot video title:

"I spent 200 days in the woods"

"I Google translated this 200 times"

"I hit myself with this golf club 200 times"

Is this really what hacker news is for now?

havkom 4 hours ago | parent | next [-]

There are fundamental differences. Many people expect a positive gradient of quality from AI overhaul of projects. For translating back and forth, it is obvious from the outset that there is a negative gradient of quality (the Chinese whispers game).

jmkni 4 hours ago | parent | prev [-]

If you reverse the order, this could be a very interesting YouTube series.