| ▲ | aeturnum 5 hours ago |
| I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up. Writing and programming are both forms of working at a problem through text, and when it goes well other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak) but it's inelegant and boring. I do think AI is great at allowing you not to write the dumb boilerplate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing because it is not innovative itself. |
|
| ▲ | Uehreka 5 hours ago | parent | next [-] |
| > Writing and programming are both a form of working at a problem through text… Whoa whoa whoa hold your horses, code has a pretty important property that ordinary prose doesn’t have: it can make real things happen even if no one reads it (it’s executable). I don’t want to read something that someone didn’t take the time to write. But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do). Really elegant code is cool to read, but many tools I use daily are closed source, so I have no idea if their code is elegant or not. I only care if it works. |
| |
| ▲ | fhd2 5 hours ago | parent | next [-] | | Users typically don't read code, developers (of the software) do. If it's not worth reading something the writer didn't take the time to write, by extension that means nobody reads the code. Which means nobody understands it, beyond the external behaviour they've tested. I'd have some issues with using such software, at least where reliability matters. Black-box testing only gets you so far. But I guess as opposed to other types of writing, developers _do_ read generated code. At least as soon as something goes wrong. | | |
| ▲ | tptacek 5 hours ago | parent | next [-] | | Developers do not in fact tend to read all the software they use. I have never once looked at the code for jq, nor would I ever want to (the worst thing I could learn about that contraption is that the code is beautiful, and then live out the rest of my days conflicted about my feelings about it). This "developers read code" thing is just special pleading. | | |
| ▲ | hexaga 4 hours ago | parent | next [-] | | You're a user of jq in the sense of the comment you're replying to, not a developer. The developer is the developer _of jq_, not developers in general. | | |
| ▲ | fhd2 4 hours ago | parent | next [-] | | Yes, that's exactly how I meant it. I might _rarely_ peruse some code if I'm really curious about it, but by and large I just trust the developers of the software I use and don't really care how it works. I care about what it does. | |
| ▲ | Stefan-H 2 hours ago | parent | prev | next [-] | | They can be “a developer” and use jq as a component within what they are developing. They do not need to be a developer of jq to have reason to look at jq’s code. | |
| ▲ | tptacek 4 hours ago | parent | prev [-] | | We're talking about Show HN here. |
| |
| ▲ | orwin 2 hours ago | parent | prev | next [-] | | But you read your coworkers' PRs. This week I decided I wouldn't read/correct the AI-generated docs and unit tests from 3 of my coworkers, because otherwise I would never get any work done. They produce twice as much poor output in 10 times the number of changed lines; that's too much. | | |
| ▲ | tptacek 2 hours ago | parent [-] | | Right, I'm not arguing developers don't read their own code or their teammates' code or anything that merges to main in a repo they're responsible for. Just that the "it's only worth reading if someone took the time to actually write it" objection doesn't meaningfully apply to code in Show HNs --- there's no expectation that code gets read at all. That's why moderation is at such pains to ensure there's some way people can play with whatever it is being shown ("sign-up pages can't be Show HNs"). |
| |
| ▲ | dingnuts 4 hours ago | parent | prev [-] | | [dead] |
| |
| ▲ | ozim 3 hours ago | parent | prev [-] | | The key part is *where reliability matters*, and there are not that many cases where it matters. We tell stories of the Therac-25, but 90% of software out there doesn’t kill people. It annoys people and wastes time, yes, but reliability doesn’t matter as much. E-mail, the internet and networking, and operations on floating point numbers are only somewhat reliable. No one is saying they will not use email because it might not be delivered. | | |
| ▲ | iugtmkbdfil834 2 hours ago | parent | next [-] | | << 90% of software out there doesn’t kill people. As we give more and more autonomy to agents, that % may change. Just yesterday I was looking at hexapods, and the first thing the site tells you (with a disclaimer that it's for competitions only) is that it has a lot of space for mounting weapons. I had to take a second look at the website to make sure I had not accidentally clicked on some satirical link. | | |
| ▲ | ozim 2 hours ago | parent [-] | | My main point is that there are many more lines of code (and running instances) of CRUD business apps on AWS than of even non-autonomous car software, even though we do have lots of cars. |
| |
| ▲ | cobbal 2 hours ago | parent | prev | next [-] | | We guarantee 5 nines of uptime, and 1 nine of not killing people | |
| ▲ | wussboy 2 hours ago | parent | prev [-] | | Most code will not kill people, but a lot of code could kill a business. |
|
| |
| ▲ | aeturnum an hour ago | parent | prev | next [-] | | > even if no one reads it I gotta disagree with you there! Code that isn't read doesn't do anything. Code must be read to be compiled, it must be read to be interpreted, etc. I think this points to a difference in our understanding of what "read" means, perhaps? To expand my pithy "not gonna read if you didn't write" bit: The idea that code stands on its own is a lie. The world changes around code and code must be changed to keep up with the world. Every "program" (is the git I run the same as the git you run?) is a living document that people maintain as need be. So when we extend the "not read / didn't write" idea, it's not about using the program (which I guess is like taking the lessons from a book), it's about maintaining the program. So I think it's possible that I could derive benefit from someone else reading an LLM's text output (they get an idea) - but what we are trying to talk about is the work of maintaining a text. | | |
| ▲ | williamcotton an hour ago | parent [-] | | Correct me if I’m wrong, but if you wrote a dependency-free recursive descent parser in C89 thirty years ago it should still compile and return the same AST. |
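(Purely for illustration, a minimal sketch of the kind of dependency-free recursive descent parser being described, assuming a toy arithmetic grammar; this is hypothetical example code, not williamcotton's actual parser. Because it relies only on C89 and the standard library, it compiles unchanged today and returns the same AST it always has.)

```c
/* Hypothetical sketch: a tiny dependency-free recursive descent parser
 * for arithmetic expressions, written in C89. Grammar:
 *   expr   := term   (('+' | '-') term)*
 *   term   := factor (('*' | '/') factor)*
 *   factor := NUMBER | '(' expr ')'
 */
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>

typedef struct Node {
    char op;                /* '+', '-', '*', '/' for operators, 'n' for numbers */
    long value;             /* set when op == 'n' */
    struct Node *lhs, *rhs;
} Node;

static const char *src;     /* cursor into the input string */

static void skip_ws(void) { while (isspace((unsigned char)*src)) src++; }

static Node *new_node(char op, long value, Node *lhs, Node *rhs)
{
    Node *n = (Node *)malloc(sizeof(Node));
    n->op = op; n->value = value; n->lhs = lhs; n->rhs = rhs;
    return n;
}

static Node *parse_expr(void);

static Node *parse_factor(void)
{
    Node *n;
    char *end;
    skip_ws();
    if (*src == '(') {
        src++;              /* consume '(' */
        n = parse_expr();
        skip_ws();
        if (*src == ')') src++;
        return n;
    }
    n = new_node('n', strtol(src, &end, 10), NULL, NULL);
    src = end;
    return n;
}

static Node *parse_term(void)
{
    Node *lhs = parse_factor();
    skip_ws();
    while (*src == '*' || *src == '/') {
        char op = *src++;
        lhs = new_node(op, 0, lhs, parse_factor());
        skip_ws();
    }
    return lhs;
}

static Node *parse_expr(void)
{
    Node *lhs = parse_term();
    skip_ws();
    while (*src == '+' || *src == '-') {
        char op = *src++;
        lhs = new_node(op, 0, lhs, parse_term());
        skip_ws();
    }
    return lhs;
}

/* Print the AST as an S-expression so the shape of the tree is visible. */
static void print_ast(const Node *n)
{
    if (n->op == 'n') { printf("%ld", n->value); return; }
    printf("(%c ", n->op);
    print_ast(n->lhs);
    printf(" ");
    print_ast(n->rhs);
    printf(")");
}

int main(void)
{
    src = "1 + 2 * (3 - 4)";
    print_ast(parse_expr());   /* prints: (+ 1 (* 2 (- 3 4))) */
    printf("\n");
    return 0;
}
```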
| |
| ▲ | nicbou 4 hours ago | parent | prev | next [-] | | It makes sense. A vibe-coded tool can sometimes do the job, just like some cheap Chinese-made widget. Not every task requires hand-crafted professional grade tools. For example, I have a few letter generators on my website. The letters are often verified by a lawyer, but the generator could totally be vibe-coded. It's basically an HTML form that fills in the blanks in the template. Other tools are basically "take input, run calculation, show output". If I can plug in a well-tested calculation, AI could easily build the rest of the tool. I have been staunchly against using AI in my line of work, but this is an acceptable use of it. | |
| ▲ | 1shooner 5 hours ago | parent | prev | next [-] | | >Code has a pretty important property that ordinary prose doesn’t have But isn't this the distinction that language models are collapsing? There are 'prose' prompt collections that certainly make (programmatic) things happen, just as there is significant concern about the effect of LLM-generated prose on social media, influence campaigns, etc. | |
| ▲ | zahlman 2 hours ago | parent | prev | next [-] | | > it can make real things happen even if no one reads it (it’s executable). "One" is the operative word here, supposing this includes only humans and excludes AI agents. When code is executed, it does get read (by the computer). Making that happen is a conscious choice on the part of a human operator. The same kind of conscious choice can feed writing to an LLM to see what it does in response. That is much the same kind of "execution", just non-deterministic (and, when given any tools beyond standard input and standard output, potentially dangerous in all the same ways, but worse because of the nondeterminism). | |
| ▲ | JohnMakin 5 hours ago | parent | prev | next [-] | | Sometimes (or often) things with horrible security flaws "work" but not in the way that they should and are exposing you to risk. | | |
| ▲ | rescripting 5 hours ago | parent [-] | | If you refuse to run AI generated code for this reason, then you should refuse to run closed source code for the same reason. | | |
| ▲ | JohnMakin 5 hours ago | parent [-] | | I don't see how the two correlate - commercial, closed source software usually has teams of professionals behind it with a vested and shared interest in not shipping crap that will blow up in their customers' faces. I don't think the motivations of "guy who vibe coded a shitty app in an afternoon" are the same. And to answer you more directly, generally, in my professional world, I don't use closed source software often for security reasons, and when I do, it's from major players with oodles more resources and capital expenditure than "some guy with a credit card who paid for a Gemini subscription." |
|
| |
| ▲ | ethmarks 2 hours ago | parent | prev | next [-] | | I guess it depends on whether you're only executing the code or if you're submitting it for humans to review. If your use case is so low-stakes that a review isn't required, then vibe coding is much more defensible. But if code quality matters even slightly, such that you need to review the code, then you run into the same problems that you do with AI-generated prose: nobody wants to read what you couldn't be bothered to write. | | |
| ▲ | josephg 2 hours ago | parent [-] | | There are lots of times when I just don’t care how it’s implemented. I got Claude to make a test suite the other day for a couple of RFCs so I could check for spec compliance. It made a test runner and about 300 tests. And an HTML frontend to view the test results in a big table. Claude and I wrote 8500 lines of code in a day. I don’t care how the test runner works, so long as it works. I really just care about the test results. Is it finding real bugs? Well, we went through the 60 or so failing tests. We changed 3 tests, because Claude had misunderstood the RFC. The rest were real bugs. I’m sure the test runner would be more beautiful if I wrote it by hand. But I don’t care. I’ve written test runners before. They’re not interesting. I’m all for beautiful, artisanal code. I love programming. But sometimes I just want to get a job done. Sometimes the code isn’t for reading. It’s for running. |
| |
| ▲ | arscan 5 hours ago | parent | prev | next [-] | | > But I’ll gladly use a tool someone had an AI write, as long as it works (which these things increasingly do). It works, sure, but is it worth your time to use? I think a common blind spot for software engineers is understanding how hard it is to get people to use software they aren’t effectively forced to use (through work or in order to gain access to something or ‘network effects’ or whatever). Most people’s time and attention is precious, their habits are ingrained, and they are fundamentally pretty lazy. And people that don’t fall into the ‘most people’ I just described, probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need. UNLESS it’s something very novel that came from a bit of innovation that LLMs are incapable of. But that bit isn’t what we are talking about here, I don’t think. | | |
| ▲ | josephg an hour ago | parent | next [-] | | > It works, sure, but is it worth your time to use? This is something I like about the LLM future. I get to spend my time with users thinking about their needs and how the product itself could be improved. The AI can write all the CSS and SQL queries or whatever to actually implement those features. If the interesting thing about software is the code itself - like the concepts and so on, then yeah do that yourself. I like working with CRDTs because they’re a fun little puzzle. But most code isn’t like that. Most code just needs to move some text from over here to over there. For code like that, it’s the user experience that’s interesting. I’m happy to offload the grunt work to Claude. |
| ▲ | mindcrime 4 hours ago | parent | prev | next [-] | | > probably won’t want to use software you had an LLM write up when they could have just done it themselves to meet their exact need Sure... to a point. But realistically, the "use an LLM to write it yourself" approach still entails costs, both up-front and on-going, even if the cost may be much less than in the past. There's still reason to use software that's provided "off the shelf", and to some extent there's reason to look at it from an "I don't care how you wrote it, as long as it works" mindset. > came from a bit of innovation that LLMs are incapable of. I think you're making an overly binary distinction on something that is more of a continuum, vis-a-vis "written by human vs written by LLM". There's a middle ground of "written by human and LLM together". I mean, the people building stuff using something like SpecKit or OpenSpec still spend a lot of time up-front defining the tech stack, requirements, features, guardrails, etc. of their project, and iterating on the generated code. Some probably even still hand-tune some of the generated code. So should we reject their projects just because they used an LLM at all, or ?? I don't know. At least for me, that might be a step further than I'd go. |
| ▲ | arscan 4 hours ago | parent [-] | | > There's a middle ground of "written by human and LLM together". Absolutely, but I’d categorize that ‘bit’ as the innovation from the human. I guess it’s usually just ongoing validation that the software is headed down a path of usefulness which is hard to specify up-front and by definition something only the user (or a very good proxy) can do (and even they are usually bad at it). | | |
| ▲ | mindcrime 3 hours ago | parent [-] | | > but I’d categorize that ‘bit’ as the innovation from the human. Agreed. |
|
| |
| ▲ | bigbuppo 3 hours ago | parent | prev [-] | | Yeah, sure, you could create a social media or photo-sharing site, but most people that want to share cat photos with their friends could just as easily print out their photos and stick them in the mail already. |
| |
| ▲ | pixl97 5 hours ago | parent | prev | next [-] | | Hell, I'd read an instruction manual that AI wrote as long as it describes things accurately. I see a lot of these discussions where a person gets upset or mad about something and suddenly a lot of black-and-white thinking starts happening. I guess that's just part of being human. | |
| ▲ | panny 37 minutes ago | parent | prev | next [-] | | >but many tools I use daily are closed source I wonder if this is a major differentiator between AI fans and detractors. I dislike and actively avoid anything closed source. I fully agree with the premise of the submission as well. | |
| ▲ | rubslopes 4 hours ago | parent | prev | next [-] | | I agree with your sentiment, and it touches on one of the reasons I left academia for IT. Scientific research is preoccupied with finding the truth, which is beautiful but very stressful. If you're a perfectionist, you're always questioning yourself: "Did I actually find something meaningful, or is it just noise? Did I gaslight myself into thinking I was just exploring the data when I was actually p-hacking the results?" This took a real toll on my mental health. Although I love science, I'm much happier building programs. "Does the program do what the client expects with reasonable performance and safety? Yes? Ship it." | |
| ▲ | exit 4 hours ago | parent | prev | next [-] | | similarly, i think that something that someone took the time to proof-read/verify can be of value, even if they did not directly write it. this is the literary equivalent of compiling and running the code. | |
| ▲ | morgoths_bane 2 hours ago | parent | prev [-] | | > I only care if it works Okay, but it is probably not going to be a tool that stays reliable or works as expected for long, depending on how complex it is, how easily it can be understood, and how it handles updates to the libraries, etc. that it is using. Also, how much do we trust this “tool”? E.g. if this is to be used in brain surgery that you’ll undergo, would you still be fine with using something generated by AI? Earlier you said you wouldn’t even read something it generated, but we’ll trust a “tool” it created because we believe it works? Why do we believe it will work? Because a computer created it? That’s our own bias towards computing: we assume that it is impartial, but this is a probabilistic model trained on data that is just as biased as we are. I cannot imagine that you have not witnessed these models creating false information that you were able to identify. Given how they fail at basic understanding, how then could we trust them with engineering tasks? Just because “it works”? What does that mean and how can we be certain? QA, perhaps, but ask any engineer here whether companies give a single shit about QA while they’re making them shove out so much slop, and the answer is going to be disappointing. I don’t think we should trust these things even if we’re not developers. There isn’t anyone to hold accountable if (and when) things go wrong with their outputs. All I have seen AI be extremely good at is deceiving people, and that is my true concern with generative technologies. Then I must ask, if we know that its only effective use case is deception, why then should I trust ANY tool it created? Maybe the stakes are quite low, maybe it is just a video player that you use to watch your Sword and Sandal flicks. Ok sure, but maybe someone uses that same video player for an exoscope and the data it presents to your neurosurgeon is incorrect, causing them to perform an action they otherwise would not have done if provided with the correct information. We should not be so laissez-faire with this technology. |
|
|
| ▲ | uean 5 hours ago | parent | prev | next [-] |
| > I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up. Amen to that. I am currently cc'd on a thread between two third parties, each hucking LLM-generated emails at each other that are getting longer and longer. I don't think either of them is reading or thinking about the responses they are writing at this point. |
| |
| ▲ | overtone1000 5 hours ago | parent | next [-] | | Honest conversation in the AI era is just sending your prompts straight to each other. | | |
| ▲ | multjoy 4 hours ago | parent | next [-] | | I mean one thing we have learnt from Epstein is that the 'elite' don't spend much time crafting the perfect email! | | |
| ▲ | rurp 2 hours ago | parent [-] | | Very true, and it's not just creepy elites either. Before I got into tech I worked a blue collar job that involved zero emailing. When I first started office work I was so incredibly nervous about how to write emails and would agonize over trivial details. Turns out just being clear and concise is all most people care about. There might be other professions where people get more hung up on formalities but my partner works in a non-tech field and it's the same way there. She's far more likely to get an email dashed off with a sentence fragment or two than a long formal message. She has learned that short emails are more likely to be read and acted on as well. |
| |
| ▲ | insanewow 4 hours ago | parent | prev [-] | | [dead] |
| |
| ▲ | rkomorn 5 hours ago | parent | prev | next [-] | | It's bad enough they didn't bother to actually write it, but often it seems like they also didn't bother to read it either. | |
| ▲ | cranberryturkey 5 hours ago | parent | prev [-] | | This is the dark comedy of the AI communication era — two LLMs having a conversation with each other while their human operators have already checked out. The email equivalent of two answering machines leaving messages for each other in the 90s. The real cost isn't the tokens, it's the attention debt. Every CC'd person now has to triage whether any of those paragraphs contain an actual decision or action item. In my experience running multiple products, the signal-to-noise ratio in AI-drafted comms is brutal. The text looks professional, reads smoothly, but says almost nothing. I've started treating any email over ~4 paragraphs the same way I treat Terms of Service — skim the first sentence of each paragraph and hope nothing important is buried in paragraph seven. | | |
| ▲ | supriyo-biswas 5 hours ago | parent | next [-] | | > the signal-to-noise ratio in AI-drafted comms is brutal This is also the case for AI-generated projects, btw; the backend projects that I’ve been looking at often contain reimplementations of common functionality that already exists elsewhere, such as in-memory LRU caches when they should have just used a library. | |
| ▲ | dhdaadhd 2 hours ago | parent | prev [-] | | oh the irony |
|
|
|
| ▲ | madcaptenor 5 hours ago | parent | prev | next [-] |
| The short version of "I am not interested in reading something that you could not be bothered to actually write" is "ai;dr" |
|
| ▲ | techblueberry 5 hours ago | parent | prev | next [-] |
| What's interesting is how AI makes this problem worse but not actually "different", especially if you want to go deep on something. Like, listicles were always plentiful, even before AI, but inferior to someone on Substack going deep on a topic. AI-generated music will be the same way: there's always been an excessive abundance of crap music, and now we'll just have more of it. The weird thing is how it will hit the uncanny valley. Potentially "better" than the crap that came before it, but significantly worse than what someone who cares will produce. DJing is an interesting example. Compared with, say, composition, beatmatching is "relatively" easy to learn, and was solved with CD turntables that can beatmatch themselves, and yet it has nothing to do with the taste you have to develop to be a good DJ. |
| |
| ▲ | TheOtherHobbes 2 hours ago | parent [-] | | In other words, AI partially solves the technique problem, but not the taste problem. In the arts the differentiators have always been technical skill, technical inventiveness, original imagination, and taste - the indefinable factor that makes one creative work more resonant than another. AI automates some of those, often to a better-than-median extent. But so far taste remains elusive. It's the opposite of the "throw everything in a bucket and fish out some interesting interpolation of it by poking around with some approximate sense of direction until you find something you like" approach that defines how LLMs work. The definition of slop is poor taste. By that definition a lot of human work is also slop. But that also means that in spite of the technical crudity, it's possible to produce interesting AI work if you have taste and a cultivated aesthetic, and aren't just telling the machine "make me something interesting based on this description." |
|
|
| ▲ | furyofantares 5 hours ago | parent | prev | next [-] |
| > "I am not interested in reading something that you could not be bothered to actually write" At this point I'd settle if they bothered to read it themselves. There's a lot of stuff posted that feels to me like the author only skimmed it and expects the masses to read it in full. |
|
| ▲ | enobrev 5 hours ago | parent | prev | next [-] |
| I feel like dealing with robocalls for the past couple of years has led me to this conclusion a bit before this boom in AI-generated text. When I answer my phone, if I hear a recording or a bot of some sort, I hang up immediately with the thought "if it were important, a human would have called". I've adjusted this slightly for my kid's school's automated notifications, but otherwise, I don't have the time to listen to robots. |
| |
| ▲ | hananova 3 hours ago | parent [-] | | Robocalls nowadays tend to wait for you to break dead air before they start playing the recording (I don't know why). So I've recently started not speaking immediately when someone calls me, and if after 10 seconds the counterparty hasn't said anything, I hang up. |
|
|
| ▲ | CuriouslyC 4 hours ago | parent | prev | next [-] |
| The truth is that now, AI or not, mostly nobody will bother to read anything you write; creating things is like buying a lottery ticket in terms of audience. Creating something lovingly by hand and pouring countless hours into it is like a golden lottery ticket with 20x odds, but if it took 50x longer to produce, you're still getting significantly outperformed by people who just spam B+ content. |
|
| ▲ | doomslayer999 5 hours ago | parent | prev | next [-] |
| Exactly. I think Perplexity had the right idea of where to go with AI (though it obviously fumbled the execution): essentially creating more advanced primitives for information search and retrieval. So it can be great at things we have stored and need to perform second-order operations on (writing boilerplate, summarizing text, retrieving information). |
|
| ▲ | UltraSane 4 hours ago | parent | prev | next [-] |
| It actually makes a lot more sense to share the LLM prompt you used than the output because it is less data in most cases and you can try the same prompt in other LLMs. |
|
| ▲ | giancarlostoro 5 hours ago | parent | prev | next [-] |
| Except it's not. What's a programmer without a vision? Code needs vision. The model is taking your vision. With writing a blog post, a comment, or even a book, I agree. |
|
| ▲ | 5 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | popalchemist 2 hours ago | parent | prev [-] |
| For now. |