| ▲ | rudedogg 15 hours ago |
| I think the real divide is over quality and standards. We all have different thresholds for what is acceptable, and our roles as engineers typically reflect that preference. I can grind on a single piece of code for hours, iterating over and over until I like the way it works, the parameter names, etc. Other people do not see the value in that whatsoever, and something that works is good enough. We both are valuable in different ways. Also, there's the pace of advancement of the models. Many people formed their opinions last year, and the landscape has changed a lot. There's also some effort required in honing your skill using them. The “default” output is average quality, but with some coaxing higher quality output is easily attained. I’m happy people are skeptical though; there are a lot of things that do require deep thought, connecting ideas in new ways, etc., and LLMs aren’t good at that in my experience. |
|
| ▲ | allenu 14 hours ago | parent | next [-] |
> I think the real divide is over quality and standards. I think there are multiple dimensions to the issue, and the divide comes from where everyone falls on those dimensions. Quality and standards are probably in there, but I think risk tolerance/aversion could be behind some of how you look at quality and standards. If you're high on risk-taking, you might be more likely to forgo verifying all LLM-generated code, whereas if you're very risk-averse, you're going to want to go over every line of code to make sure it works just right, for fear of anything blowing up. Desire for control is probably related, too. If you want more control over how something is achieved, you probably aren't going to like a machine doing a lot of the thinking for you. |
| |
| ▲ | bandrami 7 hours ago | parent | next [-] | | This. My aversion to LLMs is much more that I have low risk tolerance and the tails of the distribution are not well-known at this point. I'm more than happy to let others step on the land mines for me and see if there's better understanding in a year or two. | | |
▲ | XenophileJKO 6 hours ago | parent [-] | | I think there is more to it than that. I am a high quality/craftsmanship person. I like coding and puzzling. I am highly skilled in functional-leaning object-oriented decomposition and systems design. I'm also pretty risk averse. I have always believed that you should always be "sharpening your axe". For things like Java development, or anywhere I couldn't use a concise syntax, I would make extensive use of dynamic templating in my IDE. Want a builder pattern? Bam, auto-generated. Now when LLMs came out they really took this to another level. I'm still working on the problems.. even when I'm not writing the lines of code. I'm decomposing the problems.. I'm looking at (or now debating with the AI) what is the best algorithm for something. It is incredibly powerful.. and I still care about the structure.. I still care about the "flow" of the code.. how the seams line up. I still care about how extensible and flexible it is (based on where I think the business or problem is going). At the same time.. I can definitely tell you, I don't like migrating projects from TensorFlow vX to TensorFlow vY. | | |
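[Editor's sketch] The IDE templating the comment above describes can be pictured as a standard Java builder; the `User` class and its fields here are hypothetical, just the kind of boilerplate such a template (or now an LLM) would stamp out:

```java
// Minimal hand-rolled builder, the sort an IDE live template auto-generates.
// The class name and fields (User, name, age) are illustrative assumptions.
public class User {
    private final String name;
    private final int age;

    private User(Builder b) {
        this.name = b.name;
        this.age = b.age;
    }

    public String name() { return name; }
    public int age() { return age; }

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private String name;
        private int age;

        // Each setter returns the builder so calls can be chained.
        public Builder name(String name) { this.name = name; return this; }
        public Builder age(int age) { this.age = age; return this; }

        public User build() { return new User(this); }
    }
}
```

The pattern is almost entirely mechanical repetition per field, which is exactly why it lends itself to template or model generation.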
▲ | skydhash 3 hours ago | parent [-] | | > I'm looking at (or now debating with the AI) what is the best algorithm for something. That line always makes me laugh. There are only two criteria for an algorithm: domain correctness and technical performance. For the first, you need to step out of the code; for the second, you need proofs. Not sure what there is to debate. |
|
| |
▲ | aleph_minus_one an hour ago | parent | prev [-] | | I think it's a little bit more complicated. I, for example, would claim to be rather risk-tolerant, but I (typically) don't like AI-generated code. The solution to the paradox this creates under your post's model is simple: - I deeply love highly elegant code, which the AI models do not generate. - I cannot stand people (and AIs) bullshitting me; it makes me furious. I thus have an insanely low tolerance for conmen (and conwomen and conAIs). |
|
|
| ▲ | bigstrat2003 8 hours ago | parent | prev | next [-] |
> Also, there's the pace of advancement of the models. Many people formed their opinions last year, and the landscape has changed a lot. People have been saying this every year for the last 3 years. It hasn't been true before, and it isn't true now. The models haven't actually gotten smarter, they still don't actually understand a thing, and they still routinely make basic syntax and logic errors. Yes, even (insert your model of choice here). The truth is that there just isn't any juice to squeeze in this tech. There are a lot of people eagerly trying to get on board the hype train, but the tech doesn't work and there's no sign that it ever will. |
| |
▲ | cableshaft 7 hours ago | parent | next [-] | | All I know is it feels very different using it now than it did a year ago. I was struggling to get it to do anything too useful a year ago, just asking it for a small function here or there, and often not being totally satisfied with the results. Now I can ask an agent to code a full feature and it has been handling it more often than not, often getting almost all of the way there with just a few paragraphs of description. | |
| ▲ | domlebo70 7 hours ago | parent | prev | next [-] | | Maybe I'm solving different problems to you, but I don't think I've seen a single "idiot moment" from Claude Code this entire week. I've had to massage things to get them more aligned with how I want things, but I don't recall any basic syntax or logic errors. | | |
▲ | coffeebeqn 2 hours ago | parent | next [-] | | With the better harness in Claude Code and the >4.5 model and a somewhat thought-out workflow, we’ve definitely arrived at a point where I find it very helpful. The less you rely on one-shotting, and the more you give meaningful context and a well-defined testable goal, the better it is. It honestly does make me worry how much better it can get and whether some percentage of devs will become obsolete. It requires less hand-holding than many people I’ve worked with, and the results come out 100x faster. | |
▲ | smackeyacky 5 hours ago | parent | prev [-] | | I saw a few (Claude Sonnet 4.6), easily fixed. The biggest difference I’ve noticed is that when you say it has screwed up, it is much less likely to go down a hallucination path and can be dragged back. Having said that, I’ve changed the way I work too: more focused chunks of work with tight descriptions and sample data, and it’s like having a 2nd brain. | | |
| |
| ▲ | swader999 4 hours ago | parent | prev [-] | | And yet I just eliminated three months (easily) of tech debt on our billing system in the past two weeks. |
|
|
| ▲ | enraged_camel 14 hours ago | parent | prev [-] |
| I think this is a false dichotomy, because which approach is acceptable depends heavily on context, and good engineers recognize this and are capable of adapting. Sometimes you need something to be extremely robust and fool-proof, and iterating for hours/days/weeks and even months might make sense. Things related to security or money are good examples. Other times, it's preferable to put something in front of users that works, so that they start getting value from it quickly and provide feedback that can inform the iterative improvements. And sometimes you don't need to iterate at all. Good enough is good enough. Ship it and forget about it. I don't buy that AI users favor any particular approach. You can use AI to ship fast, or you can use it to test, critique, refactor and optimize your code to hell and back until it meets the required quality and standards. |
| |
▲ | kaffekaka 8 hours ago | parent [-] | | Yes, it is a false dichotomy, but it describes a useful spectrum. People fall on different parts of the spectrum, and it varies between situations and over time as well. It's a useful reminder that it's normal to feel differently from other people, and differently from what one felt yesterday. |
|