| ▲ | donatj 2 days ago |
| My friend's job of late has basically become reviewing the AI-generated slop his non-technical boss produces, slop that mostly seems to work, and proving why it's not production-ready. Last week he was telling me about a PR he'd received. It should have been a simple additional CRUD endpoint, but instead it was a 2,000+ LOC rat's nest adding hooks that manually manipulated their cache system to make it appear to work without actually working. He spent most of his day explaining why it shouldn't be merged. More and more I think Brandolini's law applies directly to AI-generated code > The amount of [mental] energy needed to refute ~bullshit~ [AI slop] is an order of magnitude bigger than that needed to produce it. |
|
| ▲ | givemeethekeys 2 days ago | parent | next [-] |
| The nephew has no programming knowledge. He wants to build a website that will turn him into a bazillionaire. He asks AI how to solve problem X. The AI provides direction, but he doesn't quite know how to ask the right questions. Still, the AI manages to give him a 70% solution. He will go to his grave before he learns enough programming to do the remaining 30% himself, or understand the first 70%. Delegating to AI isn't the same as delegating to a human. If you mistrust the human, you can find another one. If you mistrust the AI, there aren't many others to turn to, and each comes with an uncomfortable learning curve. |
| |
| ▲ | zarmin 2 days ago | parent | next [-] | | In the early aughts, I was so adept at navigating my town because I delivered pizza. I could draw a map from memory. My directional skills were A+. Once GPS became ubiquitous, I started relying on it, and over about a decade, my navigational skills degraded to the point of embarrassment. I've lived in the same major city now for 5 years and I still need a GPS to go everywhere. This is happening to many people now, where LLMs are replacing our thinking. My dad thinks he is writing his own memoirs. Yeah pop, weird how you and everyone else just started using the "X isn't Y, it's Z" trope liberally in your writing out of nowhere. It's definitely scary. And it's definitely sinister. I maintain that this is intentional, and the system is working the way they want it to. | | |
| ▲ | legostormtroopr 2 days ago | parent [-] | | I’m going to defend your dad here. AI slop at work is an absolute waste of time and effort. But your dad is trying to write his story, probably because he wants to leave something behind so he’s not forgotten. It might be cliche-riddled but AI is helping him write his experiences in a form he’s happy with and it’s still his story even if he got help. He’s also probably only writing it for an audience of one - you. So don’t shit on it, read it. Or you might regret it. | | |
| ▲ | Zagreus2142 2 days ago | parent | next [-] | | I get what you are saying, and a situation like this needs to be treated with extreme tact and care. But no, it's not his story, it's a low-res approximation of his story as viewed through the lens of the statistical average Reddit comment or self-published book. If the father is really into the tech side of it (as opposed to pure laziness), I'd ask him for the prompts alongside the generated text and just ignore the output. The prompts are the writing that is meant for the original commenter, and it is well worth taking the tack of not judging those by their writing quality. | |
| ▲ | g_sch 2 days ago | parent | prev [-] | | I sympathize with people who find writing difficult. But, putting myself in GP's shoes, I can't imagine trying to read my father's LLM-generated memoir. How could I possibly understand it as _his_ work? I would be sure that he gave the LLM some amount of information that would render it technically unique, but there's no way I could hear his voice in words that he didn't choose. If you're writing something for an audience of one, literally nothing matters more than the connection between you and the reader. As someone with a father who's getting on in years, even imagining this scenario is pretty depressing. |
|
| |
| ▲ | averageRoyalty 2 days ago | parent | prev | next [-] | | Who is "the nephew"? | | |
| ▲ | shash 2 days ago | parent | next [-] | | Metaphorical - the story of the professional who had to make way for the boss’s nephew who took a PHP course last week… | |
| ▲ | givemeethekeys 15 hours ago | parent | prev [-] | | My nephew! :) |
| |
| ▲ | lazide 2 days ago | parent | prev | next [-] | | More precisely, each ‘AI’ is just a statistical grouping of a large subset of other (generally randomly) selected humans. You don’t even get the same ‘human’ with the same AI, as you can see with various prompting. It’s like doing a lossy compression of an image, and then wondering why the color of a specific pixel isn’t quite right! | |
| ▲ | doublerabbit 2 days ago | parent | prev [-] | | Understand the first 70%. With the 70% you then pitch "I have this" and some Corp/VC will buy out the remaining 30%. They then in return hire engineers who are willing to lap up the 70% slop and fix the rest with more AI slop. Your nephew dies happily, achieving his dream of being a bazillionaire by doing nothing more than typing a few sentences into a search bar. |
|
|
| ▲ | matheusmoreira 2 days ago | parent | prev | next [-] |
| > He spent most of his day explaining why this shouldn't be merged. "Explain to me in detail exactly how and why this works, or I'm not merging." This should suffice as a response to any code the developer did not actively think about before submitting, AI generated or not. |
| |
| ▲ | latexr 2 days ago | parent | next [-] | | I think you might’ve missed this part from the post: > AI-generated slop his non-technical boss is generating It’s his boss. The type of boss who happily generates AI slop is likely to be the type of person who wants things done their way. The employee doesn’t have the power to block the merge if the boss wants it, thus the conversation on why it shouldn’t be merged needs to be considerably longer (or they need to quit). | | |
| ▲ | AuthAuth a day ago | parent [-] | | Why would he merge any code from his non-technical boss? Writing code obviously isn't his role, so I don't know why he would expect his code to get merged all of a sudden. Just straight up tell him: this isn't useful, please stop. | | |
| ▲ | latexr a day ago | parent [-] | | > Writing code obviously isn't his role, so I don't know why he would expect his code to get merged all of a sudden. Because people don’t act rationally most of the time, and there are many who slurp up the LLM hype and believe they can do anything with it better than humans. And there are plenty of bad bosses who believe whatever they decide is right and must be done. > Just straight up tell him: this isn't useful, please stop. We’re reading about a story second-hand. It is naive to think we have the solution for this person, that he should “just” refuse or do whatever. We’re not privy to every detail of the situation, and should not assume the developer on the other side is an idiot. From the story as told, it seems he is frustrated by the situation; don’t you think if he could simply refuse, he’d have done it already? Don’t assume people you don’t know are imbeciles; we don’t know the full story. |
|
| |
| ▲ | mholm 2 days ago | parent | prev [-] | | "You're absolutely right— This code works by [...]" | | |
| ▲ | padjo 2 days ago | parent [-] | | If it ever stops leading with a cheery affirmation we’re doomed. |
|
|
|
| ▲ | bwfan123 2 days ago | parent | prev | next [-] |
| > The amount of [mental] energy needed to refute ~bullshit~ [AI slop] is an order of magnitude bigger than that needed to produce it I see this in code reviews, where AI tools like CodeRabbit and Greptile are producing workslop in enormous quantities. It sucks up an enormous amount of human energy just to read the nicely formatted BS put out by these tools. All of that for the occasional nugget that turns out to be useful. |
|
| ▲ | oblio 2 days ago | parent | prev | next [-] |
| https://www.joelonsoftware.com/2000/05/26/reading-code-is-li... https://www.joelonsoftware.com/2000/04/06/things-you-should-... (read the bold text in the middle of the article) These articles are 25 years old. |
| |
| ▲ | fzeroracer 2 days ago | parent [-] | | Sadly, I've seen multiple well-known developers here on HN argue that reading code in fact isn't hard and that it's easy to review AI-generated code. I think what AI-generated code is fundamentally doing is exposing the cracks in many, many engineers across the board who either don't care about code quality or are completely unable to step back and evaluate their own process to see whether what they're doing is good or not. If it works, it works, and there's no need to understand why or how. | | |
| ▲ | g_sch 2 days ago | parent | next [-] | | I think this is equally true of writing. Once you see something written one way, it's very hard to imagine other ways of writing the same thing. The influence of anchoring bias is quite strong. A strong editor is able to overcome this anchoring bias, imagine alternative approaches to the same problem, and evaluate them against each other. This is not easy and requires experience and practice. I am starting to think that a lot of people who "co-write" with ChatGPT are seriously overestimating their own editing skills. | |
| ▲ | Zanfa 2 days ago | parent | prev [-] | | Reviewing code is basically applying the Chesterton’s fence principle to everything. With AI code there’s typically so much incidental noise that trying to identify intention is a challenge. But then again I’ve found a lot of people are not bothered by overly convoluted code that is the equivalent of using a hammer for screws either… | | |
| ▲ | lazide 2 days ago | parent [-] | | Worse - there is no actual intention, so attempting to grok it from the code is even more wasted energy. You have to nitpick everything, because there is no actual meaningful aim that is consistent. I ran across an outsourcer that did the same thing about 20 years ago (near as I could tell he was randomly cutting and pasting random parts of stack overflow answers until it compiled!). We got him away from the code base/fired ASAP because he was an active threat to everyone. |
|
|
|
|
| ▲ | RobinL 2 days ago | parent | prev | next [-] |
I largely agree. As a counterpoint, today I delivered a significant PR that was accepted easily by the lead dev with the following approach: 1. Create a branch and vibe code a solution until it works (I'm using Codex CLI) 2. Open a new PR and slowly write the real PR myself, using the vibe code as a reference but cross-referencing against existing code. This involved a fair few concepts that were new to me, but had precedent in the existing code. Overall I think my solution was delivered faster and was of at least the same quality as if I'd written it all by hand. I think it's disrespectful to PR a solution you don't understand yourself. But this process feels similar to my previous non-AI-assisted approach, where I would often code spaghetti until the feature worked, and then start again and do it 'properly' once I knew the rough shape of the solution. |
| |
| ▲ | cruffle_duffle 2 days ago | parent [-] | | The best way I’ve found to use LLMs for writing anything that matters is, after feeding one the right context, to take its output and then retype it in your own words. The LLM has helped capture and organize your brain dump, but by forcing yourself to write it rather than copy and paste, you get to make it your own. This technique has worked quite well in domains I’m not the best at yet, like marketing copy. I want my shit to have my own voice, but I’m not always sure what to cover… so let the LLM help me with what to cover, and then I can rewrite its work. |
|
|
| ▲ | jjgreen 2 days ago | parent | prev [-] |
| Ship it! |