▲ pydry 3 days ago
> AI changes the job to be a constant struggle with hard problems

I find this hilarious. From what I've seen watching people do it, it changes the job from deep thought and figuring out a good design to pulling a lever on a slot machine and hoping something good pops out.

The studies that show diminished critical thinking match what I saw anecdotally pairing with people who vibe coded. It replaced deep critical thinking with a kind of faith-based gambler's mentality ("maybe if I tell it to think really hard it'll do it right next time...").

The only times I've seen a notable productivity improvement were on something non-novel where it didn't particularly matter if what popped out was shit - e.g. a proof of concept, an ad hoc app, something that would naturally either work or fail obviously, etc.

The buzz people get from these gamblers' highs when it works does seem to make them happier than if they didn't use it at all, though.
|
▲ bdcravens 3 days ago
Which was my original point: not that the outcome is shit. So much of what we write is absolutely low-skill and low-impact, but necessary and labor-intensive. Most of it is so basic and boilerplate that you really can't look at it and know whether it was machine- or human-generated. Why shouldn't that work get cranked out in seconds instead of hours? Then we can do the actual work we're paid to do.

To pair this with the comment you're responding to, the decline in critical thinking is probably a sign that there are many who aren't as senior as their paycheck suggests. AI will likely let us differentiate between who the architects/artisans are and who the assembly-line workers are. Like I said, that's not a new problem; it's just that AI lays that truth bare. That will have an effect generation over generation, but that's been the story of progress in pretty much every industry since time immemorial.

▲ skydhash 3 days ago

> So much of what we write is absolutely low-skill and low-impact, but necessary and labor-intensive. Most of it is so basic and boilerplate you really can't look at it and know if it was machine- or human-generated.

Is it really? Or is it a refusal to do actual software engineering: letting the machine take care of the rote parts (deterministically) and moving up the ladder in terms of abstraction? I've seen people describe things as sludge who never learned awk to write a simple script to take care of the work, or never learned how to use their editor, instead driving it with the same patterns they would use in Notepad.

I think it's better to take a step back and reflect on why we're spending time on basic stuff in the first place, instead of praying that the LLM will generate some good basic stuff.
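For illustration, here's the kind of simple script I mean - a hypothetical sketch (the "fieldName type" input format and the Record type are invented for the example) that expands field definitions into getter boilerplate, deterministically:

    # gen_getters.awk - hypothetical sketch: expand "fieldName type" pairs
    # into Go-style getter boilerplate, one output line per input line.
    # Run as: awk -f gen_getters.awk fields.txt
    NF == 2 {
        field = $1; type = $2
        # Capitalize the first letter of the field for the method name.
        name = toupper(substr(field, 1, 1)) substr(field, 2)
        printf "func (r *Record) %s() %s { return r.%s }\n", name, type, field
    }

Same input, same output, every run - which is exactly the property you give up when you gamble on the LLM instead.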
▲ bdcravens 3 days ago

If you're not able to review what it generates, you shouldn't be using it (and arguably you're the wrong person to be doing the boilerplate work to begin with). Put differently, I go back to my original comment: AI is essentially a junior/mid dev to whom you can express what needs to be done in enough detail. In either case, AI or dev, you'd review and/or verify the result.

> Or is it a refusal to do actual software engineering: letting the machine take care of the rote parts (deterministically) and moving up the ladder in terms of abstraction?

One could say the same of installing packages in most modern programming languages instead of writing the code from first principles.

▲ layer8 3 days ago

> One could say the same of installing packages in most modern programming languages instead of writing the code from first principles.

I disagree, because libraries define an interface with (ideally) precise, reproducible semantics that you make use of. They provide exactly what the grandparent is asking for, namely a formal abstraction. When you have the choice between a library and an LLM at equal effort, the library is clearly preferable. When an LLM is more time-efficient at a given coding task, that can be taken as an indication that a suitable library, tool, or other abstraction is missing for the use case.
|
▲ pydry 3 days ago

> So much of what we write is absolutely low-skill and low-impact, but necessary and labor-intensive.

I've never found this to be true once in my career. I know a lot of devs who looked down on CRUD or whatever it was they were doing and produced boilerplate slop, though.

Code isn't like pottery. There aren't "artisanal" devs who produce lovely code for people to look at like it's a beautiful vase. Good code that is hooked into the right product-market fit can reach millions of people if it works well.

The world was replete with shitty code before AI, and mostly it either got tossed away or incurred epic and unnecessary maintenance costs because it actually did something useful. Nothing has changed on that front except that the tsunami of shit got bigger.
|
|
▲ lukaslalinsky 3 days ago

I think there are two kinds of uses for these tools:

1) you try to explain what you want to get done

2) you try to explain what you want to get done and how to get it done

The first one is gambling. The second has a very small failure rate; at worst, the plan it presents shows that it's not heading toward the solution you want.

▲ CuriouslyC 3 days ago

The thing is to understand that a model has "priors" which steer how it generates code. If what you're trying to build matches the priors of the model, you can basically surf the gradients to working software with no steering, using declarative language. If what you want to build isn't well encoded by the model's priors, it'll constantly drift, and you need to use shorter prompts and specify the how more (imperative).

▲ lukaslalinsky 3 days ago

In my experience, you need shorter prompts and constant steering for any kind of work, novel or not. You can be doing the most basic thing, and if you let it iterate long enough, it will start doing something completely stupid. It's really sad watching Gemini CLI debug something trivial, trying to change it "one last time" yet again. Fortunately, Claude is better at this, but you still need to steer it. And try extremely hard not to give it pointers to something you DON'T want it to do, especially if it's easy and in its training set, because it will gladly pick it up.