| > Or just don't use AI to write code. Anecdata, but I'm still finding CC to be absolutely outstanding at writing code. It's regularly writing, in hours, systems-level code that would take me months to write by hand, with minimal babysitting and basically no "specs" - just coherent, sane direction: like making sure it tests things in several different ways, for several different cases, including performance, comparing directly to similar implementations (and constantly triple-checking that it actually did what you asked after it said "done"). For $200/mo, I can still run 2-3 clients almost 24/7 pumping out features. I rarely clear my session. I haven't noticed quality declines. Though, I will say, one random day - I'm not sure if it was dumb luck, or if I was in a test group - CC was literally doing 10x the amount of work / speed that it typically does. I guess strange things are bound to happen if you use it enough? Related anecdata: IME, there has been a MASSIVE decline in the quality of claude.ai (the chatbot interface). It is so different recently. It feels like a wannabe, crappier version of ChatGPT, instead of what it used to be, which was something that tried to be factual and useful rather than conversational, addictive, and sycophantic. |
| |
▲ | mlinsey 5 hours ago | parent | next [-] | | My anecdata is that it heavily depends on how much of the relevant code and instructions it can fit in the context window. A small app, a task that touches one clear smaller subsection of a larger codebase, or a refactor that applies the same pattern independently to many different spots in a large codebase - the coding agents do extremely well, better than the median engineer I think. Basically "do something really hard on this one section of code, whose contract of how it interacts with other code is clear, documented, and respected" is an ideal case for these tools. As soon as the codebase is large and there are gotchas, edge cases where one area of the code affects another, or old requirements - things get treacherous. It will forget something was implemented somewhere else and write a duplicate version, it will hallucinate what the API shapes are, it will assume how a data field is used downstream based on its name and write something incorrect. IMO you can still work around this and move net-faster, especially with good test coverage, but you certainly have to pay attention. Larger codebases also work better when you started them with CC from the beginning, because its older code is more likely to actually work the way it expects/hallucinates. | | |
▲ | onlyrealcuzzo 5 hours ago | parent [-] | | > My anecdata is that it heavily depends on how much of the relevant code and instructions it can fit in the context window. Agreed, but I'm working on something >100k lines of code total (a new language and a runtime). It helps when you can implement new things as if they're green-field-ish AND THEN integrate and plumb them in later. |
| |
| ▲ | janalsncm 4 hours ago | parent | prev | next [-] | | How can a person reconcile this comment with the one at the root of this thread? One person says Claude struggles to even meet the strict requirements of a spec sheet, another says Claude is doing a great job and doesn’t even need specific specs? I have my own anecdata but my comment is more about the dissonance here. | | |
| ▲ | sarchertech 3 hours ago | parent [-] | | One person is rigorously checking to see if Claude is actually following the spec and one person isn’t? | | |
| ▲ | flyinglizard 3 hours ago | parent | next [-] | | ... or one person has a very strong mental model of what he expects to do, but the LLM has other ideas. FWIW I'm very happy with CC and Opus, but I don't treat it as a subordinate but as a peer; I leave it enough room to express what it thinks is best and guide later as needed. This may not work for all cases. | | |
▲ | sarchertech 2 hours ago | parent [-] | | If you don’t have a very strong mental model for what you are working on, Claude can very easily guide you into building the wrong thing. For example, I’m working on a huge data migration right now. The data has to be migrated correctly. If there are any issues I want to fail fast and loud. Claude hates that philosophy. No matter how many different ways I add my reasoning, and instructions to stop this, to the context, it will constantly push me towards removing crashes and replacing them with “graceful error handling”. If I didn’t have a strong idea about what I wanted, I would have let it talk me into building the wrong thing. Claude has no taste, and its opinions are mostly those of the most prolific bloggers. Treating Claude like a peer is a terrible idea unless you are very inexperienced. And even then I don’t know if that’s a good idea. | |
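The fail-fast-and-loud style described in the comment above can be sketched minimally. This is a hypothetical illustration (the field names `legacy_id` and `name`, and the functions `migrate_record`/`migrate_all`, are invented for the example, not from the actual migration being discussed):

```python
def migrate_record(record: dict) -> dict:
    # Fail fast and loud: a missing required field means the migration
    # logic (or the source data) is wrong, so crash immediately instead
    # of skipping the record or substituting a default.
    if "legacy_id" not in record:
        raise ValueError(f"record missing legacy_id: {record!r}")
    return {"id": record["legacy_id"], "name": record.get("name", "")}

def migrate_all(records: list[dict]) -> list[dict]:
    # Deliberately no try/except around the loop: the first bad record
    # halts the whole run, which is the point of the philosophy above.
    return [migrate_record(r) for r in records]
```

The "graceful error handling" an LLM tends to push for would wrap the loop in a try/except, log the bad record, and keep going - exactly the behavior that silently corrupts a migration whose correctness is non-negotiable.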
| ▲ | oops an hour ago | parent [-] | | That’s interesting to hear as for me Claude has been quite good about writing code that fails fast and loud and has specifically called it out more than once. It has also called out code that does not fail early in reviews. |
|
| |
| ▲ | hunterpayne 3 hours ago | parent | prev [-] | | [flagged] | | |
| ▲ | riquito 2 hours ago | parent [-] | | Then you should expect any positive comment to be replied negatively by a competition's puppet or bot too |
|
|
| |
▲ | ghurtado 5 hours ago | parent | prev | next [-] | | > basically no "specs" - just giving it coherent sane direction This is one variable I almost always see in this discussion: the more strict the rules that you give the LLM, the more likely it is to deeply disappoint you The earlier in the process you use it (ie: scaffolding) the more mileage you will get out of it It's about accepting fallibility and working with it, rather than trying to polish it away with care | | |
▲ | phatskat 5 hours ago | parent [-] | | To me this still feels like it would be a net negative. I can scaffold most any project with a language/stack-specific CLI command or even just by checking out a repo. And sure, AI could “scaffold” further into controllers and views and maybe even some models, and they probably work ok. It’s when they don’t, or when I need something tweaked, that the worry becomes “do I really understand what’s going on under the hood? Is the time to understand that worth it? Am I going to run across a small thread that I end up pulling until my 80%-done sweater is 95% loose yarn?” To me the trade-off hasn’t proven worth it yet. Maybe for a personal pet project, and even then I don’t like the idea of letting something else nondeterministically touch my system. “But use a VM!” they say, but that’s more overhead than I care for. Just researching the safest way to bootstrap this feels like more effort than value to me. Lastly, I think that a big part of why I like programming is that I like the act of writing code, understanding how it works, and building something I _know_. | |
▲ | michaelmrose 11 minutes ago | parent [-] | | A lot of the benefit of scaffolding is building basic context, which you can also build by feeding it the files produced by whatever CLI tool and talking through your design - forcing it to think, for lack of a better word, about it. You can also force-feed it design and API documentation. If you think you have given it too much, you are almost certainly wrong. If it's doing nonsensical things with a library, feed it the documentation; if it's still busted, make it read the source. |
|
| |
▲ | prmph 4 hours ago | parent | prev | next [-] | | But, how do you know the code is good? If you do spot checks, that is woefully inadequate. I have lost count of the number of times when, poring over code a SOTA LLM has produced, I notice a lot of subtle but major issues (and many glaring ones as well), issues a cursory look is unlikely to pick up on. And if you are spending more time going over the code, how is that a massive speed improvement like you make it seem? And, what do you even mean by 10x the amount of work? I keep saying anybody that starts to spout these sorts of anecdotes absolutely does NOT understand real world production-level serious software engineering. Is the model doing 10x the amount of simplification, refactoring, and code pruning an effective senior-level software engineer and architect would do? Is it doing 10x the detailed and agonizing architectural (re)work that a strong developer with honed architectural instincts would do? And if you tell me it's all about accepting the LLM being in the driver's seat and embracing vibe coding, it absolutely does NOT work for anything exceeding a moderate level of complexity. I have tried that several times. Up to now no model is able to write a simple markdown viewer with certain specific features I have wanted for a long time. I really doubt the stories people tell about creating whole compilers with vibe coding. If all you see and appreciate is that it is pumping out 10x the features, 10x more code, you are missing the whole point. In my experience you are actually producing a ton of sh*t, sorry. | |
| ▲ | hirvi74 4 hours ago | parent [-] | | > But, how do you know the code is good? Honestly, this more of a question about scope of the application and the potential threat vectors. If the GP is creating software that will never leave their machine(s) and is for personal usage only, I'd argue the code quality likely doesn't matter. If it's some enterprise production software that hundreds to millions of users depend on, software that manages sensitive data, etc., then I would argue code quality should asymptotically approach perfection. However, I have many moons of programming under my belt. I would honestly say that I am not sure what good code even is. Good to who? Good for what? Good how? I truly believe that most competent developers (however one defines competent) would be utterly appalled at the quality of the human-written code on some of the services they frequently use. I apply the Herbie Hancock philosophy when defining good code. When once asked what is Jazz music, Herbie responded with, "I can't describe it in words, but I know it when I hear it." | | |
▲ | sarchertech 3 hours ago | parent [-] | | > I apply the Herbie Hancock philosophy when defining good code. When once asked what is Jazz music, Herbie responded with, "I can't describe it in words, but I know it when I hear it." That’s the problem. If we had an objective measure of good code, we could just use that instead of code reviews, style guides, and all the other things we do to maintain code quality. > I truly believe that most competent developers (however one defines competent) would be utterly appalled at the quality of the human-written code on some of the services they frequently use. Not if you have more than a few years of experience. But what your point is missing is the reason that software keeps working in the first place, or stays in a good enough state that development doesn’t grind to a halt. There are people working on those codebases who are constantly at war with the crappy code. At every place I’ve worked over my career, there have been people quietly and not so quietly chipping away at the horrors. My concern is that with AI those people will be overwhelmed. They can use AI too, but in my experience, the tactical tornadoes get more of a speed boost than the people who care about maintainability. |
|
| |
| ▲ | kobe_bryant 4 hours ago | parent | prev | next [-] | | months you say? how incredible. it beggars belief in fact | |
▲ | hirvi74 4 hours ago | parent | prev [-] | | Not sure about ChatGPT, but Claude was (is still?) an absolute ripper at cracking some software if one has even a little bit of experience/low-level knowledge. At least, that's what my friend told me... I would personally never ever violate any software ToS. |
|
| > the whole thing being built on copyright infringement I am not a lawyer, but am generally familiar with two "is it fair use" tests. 1. Is it transformative? I take a picture, I own the copyright. You can't sell it. But if you take a copy, and literally chop it to pieces, reforming it into a collage, you can sell that. 2. Does the alleged infringing work devalue the original? If I have a conversation with ai about "The Lord of the Rings". Even if it reproduces good chunks of the original, it does not devalue the original... in fact, I would argue, it enhances it. Have I failed to take into account additional arguments and/or scenarios? Probably. But, in my opinion, AI passes these tests. AI output is transformative, and in general, does not devalue the original. |
| |
▲ | taikahessu 5 hours ago | parent | next [-] | | In order for an LLM to be useful, you need to copy and steal all of the work. Yes, you can argue you don't need the whole work, but that's what they took and fed in. And they are making money off of other people's work. Sure, you can use mental jiujutsu to make it fair use. But fair use for LLMs means you basically copy the whole thing. All of it. It sounds more like a total use to me. I hope the free market and technology catch up and destroy the VC-backed machinery. But only time will tell. | |
▲ | ragequittah 4 hours ago | parent [-] | | I always wonder if anyone out there thinks they're not making money off of other people's work. If you're coding, writing a fantasy novel, taking a photograph, or drawing a picture from first principles you came up with yourself, I applaud you though. | |
▲ | taikahessu 4 hours ago | parent [-] | | You are absolutely right. Seriously though, I do think that is the case. It would be self-righteous to argue otherwise. It's just the scale and the nature of this that makes it so repulsive. For my taste, copying something without permission is stealing. I don't care what a judge somewhere thinks of it. Using someone's goodwill for profit is disgusting. And I hope we all get to profit from it someday, not just a select few. But that is just my opinion. | |
| ▲ | IcyWindows an hour ago | parent [-] | | This kind of thinking seems like a road for people to have to pay a license for the rest of their life after going to school for the knowledge they "stole" from their textbooks. |
|
|
| |
| ▲ | jjwiseman 5 hours ago | parent | prev | next [-] | | And in Bartz v. Anthropic, the court found that Anthropic training their LLMs on books was "highly transformative." | |
| ▲ | Madmallard 2 hours ago | parent | prev | next [-] | | What in the mental gymnastics? They just stole everyone's hard work over decades to make this or it wouldn't have been useful at all. | | |
▲ | NewsaHackO 3 minutes ago | parent [-] | | That's a statement. The comment you are replying to had actual reasoning behind its claim. Do you have any actual reasoning behind yours? |
| |
▲ | idiotsecant 4 hours ago | parent | prev [-] | | This is a tiresome and well-trodden road. The fact of the matter is that for-profit corporations consumed the sum knowledge of mankind with the intent to make money on it by encoding it into a larger and better-organized corpus of knowledge. They cited no sources and paid no fees (to any regular humans, at least). They are making enormous sums of money (and burning even more, ironically) doing this. If that doesn't violate copyright, it violates some basic principle of decency. | | |
▲ | michaelmrose 8 minutes ago | parent [-] | | You are assuming intellectual property has an intrinsic basis when it's at best functional, not foundational. It's only useful if the net value to society is positive, which is extremely dubious. |
|
|