| ▲ | Someone1234 10 hours ago |
| "AI-edited comments" is a very interesting one. Where is the line between a spelling/grammar/tone checker like Grammarly, that at minimum use N-Grams behind the scenes, and something that is "AI" edited? What I am asking is, is "AI" in this context fully featured LLMs, or anything that improves communication via an automated system. I think many people have used these "advanced" spellcheckers for years before Chatgpt et al came on the scene. I think "generated comments" is a pretty hard line in the sand, but "AI-edited" is anything but clear-cut. PS - I think the idea behind these policies is positive and needed. I'm simply clarifying where it begins and ends. |
|
| ▲ | dang 8 hours ago | parent | next [-] |
| You're touching on an important point. More here: https://news.ycombinator.com/item?id=47342616. All this stuff is in flux. I thought a lot about whether to add the "edited" bit - but it may change. What I deliberately left out was anything about the articles and projects that get submitted here. There's a lot of turbulence in that area too, but we don't yet have clarity, or even an inkling, of how to settle that one. Edit: what I mean is this: while most of those submissions aren't very interesting, some really are. Here's an example from earlier today: Show HN: Vanilla JavaScript refinery simulator built to explain job to my kids - https://news.ycombinator.com/item?id=47338091 How do we close the aperture for the lame stuff while opening wider for the good stuff? That is far from clear. |
| |
| ▲ | dataflow 6 hours ago | parent | next [-] | | Do the guidelines also disallow comments along the lines of "according to <AI>, <blah>"? (I ask this given that "according to a Google search, <blah>" is allowed, AFAIK.) | | |
| ▲ | BeetleB 5 hours ago | parent | next [-] | | I would lean towards disallowing those. With "According to a Google search ...", someone can ask for specific links (and indeed, people often say to link to those sources to begin with instead of invoking Google). With "According to AI ... " - why would most readers care what the AI thinks? It's not a reliable source! You might as well say "According to a stranger I just met and don't know ..." If you're going to say that the AI said X, Y, Z, provide a rationale on why it is relevant. If you merely found X, Y and Z compelling, feel free to talk about it without mentioning AI. | | |
| ▲ | dataflow 3 hours ago | parent [-] | | For reference, the point here isn't to say "what AI thinks", but what you found with the help of AI. The majority of the cases where I would say "according to AI, <blah>" are where <blah> actually does cite sources that I feel appear plausible. Sometimes they're links, sometimes they're other publications not necessarily a click away. Sometimes I could spend half an hour verifying them independently; sometimes I can't do that, but they still seem worthwhile. > If you merely found X, Y and Z compelling, feel free to talk about it without mentioning AI. I think you're seeing this as too black-and-white, and missing the heart of the issue. The purpose of mentioning AI is to convey the level of (un)certainty as accurately as possible. The most accurate way to do that would often be to mention any use of AI, rather than hiding it. If AI tells me that it believes X is true because of links A and B that it cites, and I find those links compelling, then I absolutely want to mention that AI gave me those links, because I have no clue whether the model had any reason to bias itself toward those sources, or whether alternate links may have existed that stated otherwise. Whereas if a normal web search just gives links that mention terms from my query, then I get a chance to see the other links too, and I end up being the one who actually compares the contents of the different pages and figures out which one is most convincing. Depending on various factors, such as the nature of the question and the level of background knowledge I have on the topic myself, one of these can provide a more useful response than the other -- but only if I convey the uncertainty around it accurately. | | |
| ▲ | BeetleB 3 hours ago | parent [-] | | > The majority of the cases where I would say "according to AI, <blah>" are where <blah> actually does cite sources that I feel appear plausible. Sometimes they're links, sometimes they're other publications not necessarily a click away. Sometimes I could spend half an hour verifying them independently; sometimes I can't. In my experience, LLMs hallucinate citations like crazy. Over 50% of the times I've checked, the citation either didn't exist, or it did but didn't support the LLM's assertions. This is true not just in chat, but also for Google AI summaries. When the references are more often wrong than not, you can understand why many will simply downvote you for bringing LLM citations into the conversation. Why quote a habitual liar? (If you look at my other comments, I'm actually in favor of using LLMs in some capacity for HN comments. Just not in this case.) | |
| ▲ | dataflow 2 hours ago | parent [-] | | >> actually does cite sources that I feel appear plausible. > In my experience, LLMs hallucinate citations like crazy. Over 50% of the times I've checked, the citation either didn't exist, or it did but didn't support the LLM's assertions. Note that those are specifically not the cases where the AI is citing "sources that I feel appear plausible." (I also don't find over 50% hallucination to be accurate for Google AI summaries in my experience, but that depends on your queries, and in any case, I digress...) > When the references are more often wrong than not, you can understand why many will simply downvote you for bringing LLM citations into the conversation. Why quote a habitual liar? To be clear, I do understand both sides of the argument, and I don't think either side is unreasonable. I've also had the experience of being on both sides of this myself, and I don't think there's a clear-cut answer. I'm just hoping to get clarity on what the new policy is as far as this goes. I'm sure it'll be reevaluated either way as time goes on. |
|
|
| |
| ▲ | MetaWhirledPeas 4 hours ago | parent | prev | next [-] | | I don't have a problem with that. First off it's not very common. Second off it can add to a conversation, just as it can with in-person discussions. If you feel like it doesn't, don't upvote and don't reply. There's no value in pretending we're Woodward and Bernstein every time we leave a comment. | |
| ▲ | dang 28 minutes ago | parent | prev | next [-] | | We don't want people copy-pasting in comments generally. Summary comments, quote-only comments (i.e. consisting of a quote and nothing else), and duplicate comments are other examples of this. It's not specific to LLMs. However, that's probably not critical enough to formally add to the explicit guidelines, so it's probably fine to leave it in the "case law" realm—especially because downvoters tend to go after such comments. | |
| ▲ | yellowapple 5 hours ago | parent | prev | next [-] | | I think those should be allowed iff the nature of being AI-generated is relevant to the topic of discussion — e.g. if we're talking about whether some model or other can accurately respond to some prompt and people feel inclined to try it themselves. | | |
| ▲ | lossyalgo 5 hours ago | parent [-] | | I constantly read those comments, and I personally have conflicting opinions about them. On one hand, it's interesting to compare what is coming out of models, but on the other hand, LLMs are all non-deterministic, so results will be fairly random. On top of that, everybody has a different "skill" level when prompting. In addition, models are constantly changing, so "I asked ChatGPT and it said..." means nothing when there is a new version every few months, not to mention you can often pick one of 10+ flavors from every provider, and even those aren't guaranteed not to change under the hood to some degree over time. |
| |
| ▲ | crossroadsguy 4 hours ago | parent | prev | next [-] | | I'd rather ask AI to provide a source and then cite the source. But if the source itself is AI backed, then it's a bit different :) | | |
| ▲ | dataflow 3 hours ago | parent [-] | | I explained this in a bit more depth in an adjacent reply (feel free to take a look) but obtaining the source from AI doesn't achieve the same thing. For example, there might be other links that contradict that source, which the AI wouldn't cite. Knowing that AI picked the "best" one vs. a human is incredibly relevant when assigning and weighing credibility. |
| |
| ▲ | snowwrestler 5 hours ago | parent | prev | next [-] | | Citations can be helpful. But AI summaries and Google searches are poor citations because they are not primary sources. | |
| ▲ | dfxm12 4 hours ago | parent | prev [-] | | AI is not a source. A Google search result page is not a source. Hopefully, these things help you find a source. If you're posting something you feel the need to source, post the source along with your comment! For example, don't say "according to a Google search, x"... say something like "according to Microsoft's documentation, x" and provide a link to Microsoft Learn page... |
| |
| ▲ | crossroadsguy 4 hours ago | parent | prev | next [-] | | I wasn't sure whether it was an intentional omission or an unintended gap, as the guideline specifically points to "comments". So it seems AI-generated/edited posts are fine. Strange, because both can be flagged/downvoted if it were to be left at that. | |
| ▲ | dang 30 minutes ago | parent [-] | | I'm not saying they're all fine, I'm saying we don't yet have any idea of where to make a cut. The comments thing is a lot more intimate in the sense that anyone posting comments is inside the house. |
| |
| ▲ | schappim 7 hours ago | parent | prev [-] | | Please rethink the “edited” bit on accessibility grounds. I have a kid with severe written language issues, and the utilisation of speech to text with a LLM-powered edit has unlocked a whole world that was previously inaccessible. I would hate to see a culture that discourages AI assistance. | | |
| ▲ | dang 23 minutes ago | parent | next [-] | | That's totally legit and your kid, should they ever take an interest in Hacker News, is welcome here. These rules are always fuzzy and there's always a long tail of exceptions. All the more so under turbulent conditions like right now. I wrote more about this elsewhere in the thread, in case it's useful: https://news.ycombinator.com/item?id=47342616. | |
| ▲ | davorak 6 hours ago | parent | prev | next [-] | | Are you up for sharing details? > I would hate to see a culture that discourages AI assistance. Mostly I think the pushback is about AI assistance in its current form. It can get in the way of communicating rather than assisting. The cost, though, is mostly borne by the readers and those not using the AI for assistance. I have seen this happen when the AI adds info and thoughts that were tangential to the original author's, and I think (but cannot verify) there have been times where an author seems to try to dig down on the details but seemingly cannot. | |
| ▲ | BeetleB 7 hours ago | parent | prev | next [-] | | Oh wow. I did not anticipate that, which is embarrassing given that I wrote this just recently: https://news.ycombinator.com/item?id=47326351 Yes, please at least have a carveout for accessibility. I definitely have dictated HN comments in the past, and my flow uses LLMs to clean it up. It works, and is awesome when you're in pain. | |
| ▲ | happytoexplain 6 hours ago | parent | prev | next [-] | | Since it's mostly a good-faith rule to begin with, it seems easy to add something like, "unless you are using it as an assistive technology for accessibility reasons". | | |
| ▲ | dang 21 minutes ago | parent [-] | | Yes, and that's the case with all the rules. I don't want to say "you should break them when it makes sense" because if I do, someone will post "Tell HN: dang says break the rules". But the rules are there to serve the intended spirit of the site—not the other way around. If you're posting in that spirit, I would hope we would recognize and welcome that, not tut-tut it with rules. |
| |
| ▲ | pesfandiar 7 hours ago | parent | prev [-] | | Hear hear. And like many other aspects of accessibility, it will help a huge number of people who may not have any severe issues. e.g. non-native English speakers using LLM-powered edits. |
|
|
|
| ▲ | jaysonelliot 10 hours ago | parent | prev | next [-] |
| You should use your own words. It might seem that a tool like Grammarly is just an advanced spellcheck, but what it's really doing is replacing your personal style of writing with its own. It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing; it's best for people's own thoughts to come through exactly as they have written them. |
| |
| ▲ | bruckie 9 hours ago | parent | next [-] | | My elementary school kid came home yesterday and showed me a piece of writing that he was really proud of. It seemed more sophisticated than his typical writing (like, for example, it used the word "sophisticated"). He can be precocious and reads a ton, though, so it was still plausible that he wrote it. I asked him some questions about the writing process to try to tease out what happened, and he said (seemingly credibly) that he hadn't copied it from anywhere or referenced anything. He also said he didn't use any AI tools. After further discussion, I found out that Google Docs Smart Compose (suggested-next-few-words feature) is enabled by default on his school-issued Chromebook, and he had been using it. The structure of the writing was all his, but he said he sometimes used the Smart Compose suggestions (and sometimes didn't). He liked a lot of the suggestions and pressed tab to accept them, which probably bumped up the word choice by several grade levels in some places. So yeah, it can change the character of your writing, even if it's just relatively subtle nudges here or there. edit: we suggested that he disable that feature to help him learn to write independently, and he happily agreed. | | |
| ▲ | Terr_ 9 hours ago | parent | next [-] | | To rationalize my gut-feelings on this, I think it comes down to the spectrum between: 1. A system that suggests words, the child learns the word, determines whether it matches their intent, and proceeds if they like the result. 2. A system that suggests words, and the child almost-blindly accepts them to get the task over with ASAP. The end-results may look the same for any single short document, but in the long run... Well, I fear #2 is going to be way more common. | | |
| ▲ | zahlman 8 hours ago | parent | next [-] | | The analogy with tab-completion of code seems apt. At first you blindly accept something because it has at least as good a chance of working as what you would have typed. Then you start to pay attention, and critically evaluate suggestions. Then you quickly if not blindly accept most suggestions, because they're clearly what you would have written anyway (or close enough to not care). The phenomenon was observed in religious philosophy over a millennium ago (https://terebess.hu/zen/qingyuan.html). | | |
| ▲ | abustamam 8 hours ago | parent [-] | | Tab completion was so novel back when full e2e AI tooling was not really effective. Now that it is, I just turn tab completion off totally when I write code by hand. It's almost never right. | | |
| ▲ | skydhash 7 hours ago | parent [-] | | Emacs has completion (but you can bind it to tab). The nice thing is that you can change the algorithm to select what options come up. I've not set it to auto, but by the time I press the shortcut, it's either only one option or a small set. |
|
| |
| ▲ | bruckie 8 hours ago | parent | prev | next [-] | | From his description, it sounded like this was more of #1. He cared a lot about the topic he was writing about, and has high standards for himself, so it's very likely that he would have considered and rejected poor suggestions. I have mixed feelings about it. On the one hand, you're right: carefully considering suggestions can be a learning opportunity. On the other hand, approval is easier than generation, and I suspect that without flexing the "come up with it from scratch" muscle frequently, his mind won't develop as much. | |
| ▲ | yellowapple 5 hours ago | parent | prev [-] | | #1 would be a net improvement over the status quo IMO. Seems like a great way for people to expand their vocabularies organically. | | |
| ▲ | lossyalgo 5 hours ago | parent [-] | | That reminds me of one of the biggest missing features of Wordle, IMO: they never give a definition of the word after the game is finished! I usually do end up googling words I don't know (which is quite often) but I'm guessing I'm one of the few who goes to the trouble. I've even written to The New York Times a couple of times to suggest adding a short definition at the end, as I honestly feel like a ton of people could totally up their vocabulary game, and it surely could be added with minimal effort (considering they even added a Discord multiplayer mode). | |
| ▲ | Terr_ 6 minutes ago | parent | next [-] | | Is Wordle really the best vehicle for that, though? I mean, it tends towards a subset of 5-letters words the audience is more likely to know in advance, excluding a lot of the more-surprising words. A "click to see more about why this answer fits" crossword, on the other hand... | |
| ▲ | yellowapple an hour ago | parent | prev [-] | | That's a brilliant idea and now that you've mentioned it it seems like a rather glaring omission. |
|
|
| |
| ▲ | comboy 9 hours ago | parent | prev | next [-] | | Oh how I despise these suggestions. You sometimes look for a way to express something and you are on the verge of giving the world something truly original, but as soon as your brain sees the suggestion it goes "oh yeah that fits" | | |
| ▲ | SchemaLoad 7 hours ago | parent | next [-] | | I disabled them immediately, it feels like the tech version of the ADHD person who keeps interrupting you with what they think you are trying to say. Even if the suggestion is correct, it saves you at most 2 seconds at the cost of interrupting you constantly. | |
| ▲ | Terr_ 9 hours ago | parent | prev | next [-] | | True! There's an important cybernetic aspect to all this, where an automatic suggestion can be an interruption, sometimes worse if the suggestion is decent. A certain amount of friction is necessary, at least if the goal is to help the person learn or make something original. | |
| ▲ | lossyalgo 5 hours ago | parent | prev | next [-] | | I look forward to reading studies in 10 years how we all became stupider thanks to this "feature". One step closer to the movie Idiocracy. | |
| ▲ | TimTheTinker 9 hours ago | parent | prev | next [-] | | GK Chesterton would have something brilliant to say about the inauthenticity of it all or something. | |
| ▲ | jrockway 9 hours ago | parent | prev | next [-] | | I see the suggestions and then choose something different anyway. I don't want to use one of the top 3 most popular responses to an email from a friend. Even if it's something transactional. | |
| ▲ | JumpCrisscross 9 hours ago | parent | prev [-] | | > I despise these suggestions As an adult, I do too. As a middle schooler, we absolutely used word processors’ thesaurus features to add big words to our essays because the teachers liked them. | | |
| ▲ | Gibbon1 9 hours ago | parent [-] | | A friend of mine was an English teacher. She quit because she wasn't going to waste her time 'grading' 30 essays written by AI. Anyway, before that she HATED the thesaurus. And she could tell when students were using it to make their writing more fancy pants. | |
| ▲ | zahlman 8 hours ago | parent | next [-] | | One problem I see is that LLMs have a more nuanced... well, model of how words and their meanings relate to each other than a dead-tree thesaurus could ever present, what with its simplified "synonym" and "antonym" categories. Online versions try to give some similarity metrics, but don't get into the nuance. (It's not as if someone who takes either approach would want to spend the time reading and understanding that, anyway.) | |
| ▲ | tigen 2 hours ago | parent | prev | next [-] | | In-class essays impossible? Pencil to paper? | |
| ▲ | JumpCrisscross 9 hours ago | parent | prev [-] | | > she could tell when students were using it to make their writing more fancy pants I had two teachers who called us out on this, and actually coached us on our writing, and I remember them fondly. (They were also fans of in-class essaying.) The others wanted to count big words. |
|
|
| |
| ▲ | 9 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | ma2kx 8 hours ago | parent | prev | next [-] | | As a non-native English speaker, my own words wouldn't be in English. If I express myself in English, I soon struggle for the right words. On the other hand, I think when I read some English text I'm quite capable of sensing the nuances. So it feels like when I auto-translate my text to English and then read it again and make some corrections, I can express my thoughts much better. | |
| ▲ | comboy 9 hours ago | parent | prev | next [-] | | My broken english now officially bumps my comments up instead of down. Sweet. | | |
| ▲ | zahlman 8 hours ago | parent [-] | | For what it's worth, I had a quick look through your comment history and your English seems just fine to me as a native speaker (at least for informal communication). | | |
| ▲ | ziml77 7 hours ago | parent [-] | | People who don't have English as their first language often seem to underestimate how good their English actually is. I wonder if it's because their reference point is formal English rather than the much more forgiving English we use in casual day-to-day conversation. |
|
| |
| ▲ | lamontcg 9 hours ago | parent | prev | next [-] | | Books and newspapers have had editors for centuries. It is just code review for the written word. [It looks like MS Word 97 had the ability to detect passive voice as well, so we're talking 30 year old technology there that predates LLMs -- how far down the Butlerian Jihad are we going with this?] | | |
| ▲ | MeetingsBrowser 9 hours ago | parent [-] | | Editors are mostly tasked with maintaining a consistent style and standard. There is no need for that here beyond maybe spellcheck. Use your own thoughts, voice, and words. | | |
| ▲ | lamontcg 9 hours ago | parent [-] | | I don't personally use AI/LLMs for any informal writing here or on reddit, etc. But I think it is pretty weird to be overly concerned around people, particularly ESL, who use tools to clean up their writing. The only thing I really care about is when someone posts LLM regurgitated information on topics they personally don't know anything about. If the information is coming from the human but the style and tone is being tweaked by a machine to make it more acceptable/receptive and fix the bugs in it, then I don't understand why you're telling me I need to care and gatekeeping it. It also is unlikely to be very detectable, and this thread seems to only serve a performative use for people to get offended about it. | | |
| ▲ | pseudalopex 9 hours ago | parent [-] | | Other tools to clean up writing are allowed. They did not tell you you must care. You told them they must not. The submission's use was to tell you and others LLM-generated tone was not more acceptable. | |
| ▲ | lamontcg 8 hours ago | parent [-] | | Well good luck detecting it. | | |
| ▲ | davorak 6 hours ago | parent [-] | | If it never gets in the way of humans communicating, it probably won't be an issue. That is the reading I have of the rule and dang's comments: > HN is for conversation between humans. If it is enhancing that instead of detracting and wasting people's time, it does not seem to be against the spirit of the rules. | |
| ▲ | yellowapple 5 hours ago | parent [-] | | Except the letter of the rule makes it verboten even if it “never gets in the way of humans communicating”. | |
| ▲ | davorak 3 hours ago | parent [-] | | > HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make them too precise. That is from dang's post at https://news.ycombinator.com/item?id=47342616. That whole post clarifies the intent of the new rule(s). | |
| ▲ | yellowapple an hour ago | parent [-] | | The problem with “spirit-of-the-law” is that having rules be subject to discretion is a pretty clear avenue for discrimination and abuse. Not as big of a deal for an Internet forum as it would be for, say, a country's legal code and the enforcement thereof, but the lack of a clear standard for a rule makes that rule hard to follow and harder to enforce impartially. |
|
|
|
|
|
|
|
| |
| ▲ | NewsaHackO 9 hours ago | parent | prev | next [-] | | >It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." It is definitely not true that it is better for a poster to communicate like an individual when it comes to spelling and grammar. People ignore posts that have poor grammar or spelling mistakes, and communications with poor grammar are seen as unprofessional. Even I do it at a semi-subconscious level. The more attention someone has to pay to understand your post, the fewer people will be willing to put in that effort. | |
| ▲ | RevEng 2 hours ago | parent [-] | | Exactly. Tell that to whoever is grading your next paper, or reviewing your resume, or watching your presentation. People are judged by their linguistic ability even in cases where it shouldn't matter. It's a well known heuristic bias. It's no surprise that many of the people here denying it are themselves quite literate. |
| |
| ▲ | mjg2 9 hours ago | parent | prev | next [-] | | I was just re-reading the passage from Plato's "The Phaedrus" on writing & the "art" of the letter for an essay I'm working on, and your remark is salient for this discussion on LLM-style AI and social media at large. | | | |
| ▲ | davebranton 7 hours ago | parent | prev | next [-] | | Precisely. As I wrote in my assessment of AI for my workplace; "Your unique human voice is more valuable than a thousand prompt-driven LLM doggerels." | |
| ▲ | jjk166 7 hours ago | parent | prev | next [-] | | > It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing; it's best for people's own thoughts to come through exactly as they have written them. This is the opposite of how language works. You want people to understand the idea you're trying to communicate, not fixate on the semantics of how you communicated. Language is like fashion - you only want to break the rules deliberately. If AI or an editor or whatever changes your writing to be more clear and correct, and you don't look at it and say "no, I chose that phrasing for a reason," then the editor's version is much more likely to be understood correctly by the recipient. | |
| ▲ | Aldipower 10 hours ago | parent | prev | next [-] | | That's true, but on the flip side I regularly get downvoted because my English is not the best, to say it mildly. So, now I need to be really careful to a) write in good English or b) not be recognised as an LLM-corrected version of my English. Where is the line? I shouldn't be downvoted for my English, I think, but that is the reality. Edit:
I already got downvoted. :-) Sure, no one can tell exactly why. Maybe the combination of bad English _and_ talking sh*ce isn't ideal at all. :-D Anyways, I have enough karma, so I can last quite a while.. | | |
| ▲ | ssl-3 9 hours ago | parent | next [-] | | It goes both ways. The quality of my writing varies (based on my mood as much as anything else, I suppose), but when it is particularly good and error-free then I often get accused of being a bot. Which is absurd, since I don't use the bot for writing at all. | |
| ▲ | 8 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | colpabar 9 hours ago | parent | prev | next [-] | | > I shouldn't be downvoted for my English I think, but that is the reality. How do you know? Is it possible the downvoters just didn't like what you said? | | |
| ▲ | phs318u 9 hours ago | parent [-] | | It’s possible of course but reading all the comments from various non-native English speakers here it seems like a common story. It may indicate a subliminal bias in readers (most of whom are presumably American). | | |
| ▲ | yorwba 9 hours ago | parent [-] | | Note that those comments are written in perfectly understandable English. Further note how often you come across comments written in perfectly understandable English, but they're downvoted anyway. It suggests a bias in writers to assume that people would agree with them if only they could express their thoughts accurately. |
|
| |
| ▲ | 9 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | Teever 9 hours ago | parent | prev | next [-] | | But the problem is that people with poor written language / English skills are 'competing' with people who have superb skills in this domain. There are people here who sit at a desk all day banging out multipage emails for work who decide to write posts of a similar linguistic calibre for funsies. Meanwhile you have someone in a developing country who just got off a brutal twelve-hour shift doing manual labour in the sun who wants to participate in the conversation with an insightful message that they bang out on a shitty little cellphone onscreen keyboard while riding on bumpy public transit. You could have a great idea, express it poorly, and be penalized for doing so here, while someone could have a blah idea expressed excellently and it's showered in replies despite being by some metrics (the ones I think are most important) worse than the other post. What's the solution for that? | |
| ▲ | magicalist 9 hours ago | parent | next [-] | | > What's the solution for that? Remember that you're on a message board and you're not actually 'competing' for anything? | | |
| ▲ | Teever 8 hours ago | parent [-] | | This is a perfect example of what I'm talking about. I knew someone was going to comment on my use of the word there despite me putting it in quotes, which was intended to let the reader know that I meant that word as an approximation. When I say competing I mean competing in the space of ideas here. There is a ranking system here that raises or lowers the visibility and prominence of your comments, and it's based on upvotes by other users. For better or worse, people penalize comments with grammatical errors over ones without them, and that affects how much exposure other users have to the ideas that people write and how much interaction they get from them. If that's the case why would somebody who has good ideas but poor expressive capability bother posting here if their comments are just going to get ignored over relatively vapid comments that are grammatically correct? | |
| ▲ | davorak 6 hours ago | parent | next [-] | | > If that's the case why would somebody who has good ideas but poor expressive capability bother posting here if their comments are just going to get ignored over relatively vapid comments that are grammatically correct? The main problem is that AI editing is consistently seen making things worse. Take a look at the examples in dang's link in their comment: https://news.ycombinator.com/item?id=47342616. In the ones I read, the AI editing is either hurting or needs to be much, much better to help. | |
| ▲ | NewsaHackO 8 hours ago | parent | prev [-] | | No, I get your point. Unfortunately, a lot of people here act high and mighty, as if they were posting for some altruistic reason. The reason why I, you, and everyone else posts here is the human one: we want others to engage with our posts. To do that, you have to put your best foot forward, which includes making sure the spelling and grammar of your posts are correct. While I don't use an LLM for this, I think it is valid to use these tools to make sure nothing gets in the way of whatever point you are trying to make. | | |
| ▲ | Teever 8 hours ago | parent [-] | | > In order to do that, you have to put your best foot forward In English. You have to put your best foot forward in English. And in your environment, with the resources you have at your disposal. For example, I'm currently engaging with you between steps of a chemistry process happening under the fume hood next to me, while wearing a respirator, a muggy chemical-resistant plastic gown, and disposable nitrile gloves. I am absolutely certain these conditions are different from the ones I would need to 'put my best foot forward' in this discussion. I'm also quite certain that you and I would both absolutely stumble if we were obligated to participate in this forum in a language we're not proficient in, as many users often attempt to do and are unfairly penalized for by other members of the community. I'm with you on LLM usage for grammatical issues for non-native speakers. I bet more people in this community would feel the same way if dang whimsically mandated that people use a language other than English on certain days of the week. | | |
| ▲ | fragmede 6 hours ago | parent [-] | | Oh shit that would be fun. Tuesday, we're going to do it in Mongolian, see how that goes. |
|
|
|
| |
| ▲ | 12_throw_away 8 hours ago | parent | prev [-] | | > You could have a great idea and express it poorly and be penalized for doing so here while someone could have a blah idea expressed excellently and it's showered in replies despite being in some metrics (the ones I think are most important) worse than the other post. I absolutely do not understand this comment. Are you saying that posting is competitive and that comments have "metrics"? | | |
| ▲ | fragmede 6 hours ago | parent [-] | | Yes! If my comment is above yours in a thread, it means I got more upvotes than you did, which means I get special bonuses and more to eat and you go hungry in Internet land. Also it means I'm better than you (obviously) and I get to go to this secret club with all the pretty people and you're not invited. Isn't that how this all works? |
|
| |
| ▲ | fragmede 9 hours ago | parent | prev | next [-] | | I disagree. HN is going to bury my raw, unedited tirade of a comment about those fucking morons who couldn't code their way out of a paper bag. If I send a comment to ChatGPT, open the prompt with "this poster is a fucking dumbass, how do I tell them this", and use that to get to a well-reasoned response, because that's the tool we have available today, then we're all better off. The guidelines state: > Be kind. Don't be snarky. Converse curiously; don't cross-examine.
> Edit out swipes.
> Don't be curmudgeonly. On the best of days I manage to follow the rules, but I'm only human. If I run my comment through ChatGPT to help me edit out swipes on the bad days, is that not ok? I'm not using ChatGPT to generate comments, but I've got the -4 comments to show that posting my "thoughts exactly as I have written them" isn't a winning move. | | |
| ▲ | zahlman 8 hours ago | parent | next [-] | | If you see an incompetent coder and wish to communicate that the person responsible is a "fucking moron/dumbass", the tone with which you do so is not the problem. Tell us what is wrong with the code, as objectively as possible. That's what the guidelines are trying to convey. | |
| ▲ | yorwba 9 hours ago | parent | prev [-] | | The guidelines don't say anything about not posting something because an LLM told you that you shouldn't... |
| |
| ▲ | drusepth 9 hours ago | parent | prev [-] | | I'm not sure I agree with this. I don't really want to see someone else's stylistic "warts". I just want clean, easy-to-read content, and I don't care about the person who wrote it. A tool like Grammarly is the difference between readable and unreadable (or understandable and not) for many people. | | |
| ▲ | timeinput 9 hours ago | parent [-] | | You could run the comments everyone else posts through an AI tool and ask it to rephrase them so they're clean and easy to read. You could even write a plugin for your favorite web browser to do that on every site you visit. It seems much harder to achieve the inverse, that is (would you rather I use i.e.?), to rewrite a paragraph as the original author wrote it before an AI rewrote it to make it clean (do you like oxford commas, and em/en dashes? Just prompt your AI) and easier to read. | | |
| ▲ | phs318u 9 hours ago | parent | next [-] | | > You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read. For those coming from a language other than English, you are more likely to lose information by using a tool to “reconstruct” meaning from poorly phrased English as an input, as opposed to the poster using a tool to generate meaningful English from their (presumably) well-written native language. | |
| ▲ | kazinator 9 hours ago | parent | prev | next [-] | | > You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read. But that creates a private version of the text which the original poster didn't sign off on. You could have fixed something contrary to their intent. | |
| ▲ | tempestn 9 hours ago | parent | prev [-] | | There's a big difference between me running a filter on other people's words, and those people themselves choosing to run one and then approving the results. I personally don't see a problem with someone using a grammar checker as long as they aren't just blindly accepting its suggestions. That said, if someone actually is using it in that way, it shouldn't be detectable anyway, so it probably doesn't matter all that much whether or not it's included in the letter of the rule. |
|
|
|
|
| ▲ | Mordisquitos 9 hours ago | parent | prev | next [-] |
| I think that the line between A"I" editing to fix grammar or to translate from a different native language and A"I" editing by using an LLM is one of those things that's very hard to unambiguously encode in written guidelines, but easy to intuitively understand using common sense, in the vein of I know it when I see it. https://en.wikipedia.org/wiki/I_know_it_when_I_see_it |
|
| ▲ | observationist 9 hours ago | parent | prev | next [-] |
On a technical level, you can really only guard against changes to your semantics and voice - if you're letting software alter the meaning, or meanings, you intend, and use words you don't normally use, it's probably gone too far. This is probably ok: >> On a technical level, you can really only guard against software that changes your semantics or voice. If you're letting it alter the meaning (or meanings) you intend, or if it starts using words you would never normally use, then it's gone too far. This is probably too far: >>> On a technical level, it's important to recognize that the only robust guardrail we can realistically implement is one that prevents modifications to core semantics or authorial voice. If you're comfortable allowing the system to refine or rephrase the precise meanings you originally intended — or if it begins incorporating vocabulary that doesn't align with your typical linguistic patterns — then you've likely crossed a meaningful threshold where the output no longer fully represents your authentic intent. Something to consider is that you can analyze your own stylometric patterns over a large collection of your writing, and distill them into a system of rules and patterns that an AI can readily follow. It is technically possible, albeit tedious, to clone your style so that it's indistinguishable from your actual human writing; it can even include spelling mistakes you've made before, at a rate matching your actual writing. AI editing is weird, though. I don't see a need for it, unless English isn't your native language. |
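The "analyze your own stylometric patterns" idea can be sketched in a few lines. This is a toy illustration, not a serious stylometry tool; the feature set (average sentence length, type-token ratio, most frequent words) and all names here are my own choices, not anything from a real product:

```python
import re
from collections import Counter

def stylometric_profile(texts):
    """Distill rough stylometric features from a corpus of one's own writing."""
    words = []
    sentence_lengths = []
    for text in texts:
        # Naive sentence split on terminal punctuation; fine for a rough profile.
        for sentence in re.split(r"[.!?]+", text):
            tokens = re.findall(r"[a-z']+", sentence.lower())
            if tokens:
                sentence_lengths.append(len(tokens))
                words.extend(tokens)
    counts = Counter(words)
    total = len(words)
    return {
        "avg_sentence_len": sum(sentence_lengths) / len(sentence_lengths),
        "vocab_richness": len(counts) / total,  # type-token ratio
        "top_words": counts.most_common(5),
    }

profile = stylometric_profile([
    "I think this is fine. But I could be wrong.",
    "On a technical level, you can only guard against so much.",
])
```

A real attempt at style cloning would use far richer features (punctuation habits, characteristic misspellings, n-gram distributions), but the principle is the same: measure, then instruct the model to match the measurements.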
|
| ▲ | tsukikage 9 hours ago | parent | prev | next [-] |
| > Where is the line between a spelling/grammar/tone checker like Grammarly For me, the line is precisely at the point where a human has something they want to say. IMO - use the tools you need to say the thing you want to say; it's fine. The thing I, and many others here, object to is being asked to read reams of text that no-one could be bothered to write. |
|
| ▲ | happytoexplain 10 hours ago | parent | prev | next [-] |
| I think there's a pretty clear gap between editing for grammar/spelling and editing for tone. |
| |
| ▲ | RevEng 2 hours ago | parent [-] | | How so and why? I know plenty of people whose writing naturally carries a tone that they don't intend. I often help them to change their wording to be less confrontational or seemingly sarcastic when it isn't meant to be. Would you say it is wrong for them to get assistance to get the tone they intend rather than the one they would tend to write? |
|
|
| ▲ | jacquesm 9 hours ago | parent | prev | next [-] |
| Trying to lawyer this is the wrong approach. When in doubt: don't. |
| |
| ▲ | Someone1234 9 hours ago | parent [-] | | That feels very uncharitable. When a policy is introduced to seemingly guard against new problems, but happens to be inadvertently targeting preexisting and common technology, I don't feel like it is "lawyering" it to want clarity on that line. For example, it could be argued this forbids all spellcheckers. I don't think that is the implied intent, but the spectrum is huge in the spellchecker space. From simple substitutions + rule-based grammar engines through to n-grams, edit-distance algorithms, statistical machine translation, and transformer-based NLP models. |
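For concreteness, the "edit-distance algorithms" point on that spectrum can be sketched as a toy corrector that suggests the dictionary word with the smallest Levenshtein distance. This is a minimal illustration with a stand-in three-word dictionary, not how any shipping spellchecker actually works:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suggest(word, dictionary):
    """Return the dictionary word closest to `word` by edit distance."""
    return min(dictionary, key=lambda w: levenshtein(word, w))

suggestion = suggest("grammer", ["grammar", "hammer", "glamour"])
```

Even this crude approach has no ML in it at all, which is part of the point: "spellchecker" covers everything from this up through transformer models, and a policy that just says "AI" doesn't say where on that line it bites.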
|
|
| ▲ | unsignedint 9 hours ago | parent | prev | next [-] |
| I think the only practical litmus test here is whether you can stand by the text as your own words. It’s not like we have someone looking over commenters’ shoulders as they type. Ultimately, this comes down to people making a good-faith judgment about how much AI was involved, whether it was just minor grammatical fixes or something more substantial. The reality is that there isn’t really a shared consensus on exactly where that line should be drawn. |
|
| ▲ | altairprime 9 hours ago | parent | prev | next [-] |
| Grammarly use is outright prohibited by this; AI-edited writing is no longer writing that you hold personal and exclusive responsibility for having written. Consider Stephen Hawking’s voice box generator. While the sounds produced were machine-assisted, the writing was his alone. If you find yourself unable to participate in this web forum without paying a proofreader (in time, money, or cycles) to copy-edit your writing, then you’re not welcome on HN as a participant. |
| |
| ▲ | phs318u 9 hours ago | parent [-] | | > If you find yourself unable to participate in this web forum without paying a proofreader (in time, money, or cycles) to copy-edit your writing, then you’re not welcome on HN as a participant. You forgot the /s ? | | |
| ▲ | altairprime 9 hours ago | parent [-] | | It's not sarcasm. If you feel I have misunderstood the intent of the guideline we're discussing — "Don't post generated/AI-edited comments", as the title currently reads — then I'm happy to discuss further. (I often make logical negation errors that I miss in proofing, so it's possible I slipped up, too!) | | |
| ▲ | phs318u 9 hours ago | parent [-] | | I thought it was sarcasm, given you are asking people to "pay a proofreader". This sounds ludicrous. Could you clarify what you meant by that line, if it's not sarcasm? Because I'm having a hard time thinking it's meant to be taken at face value. | | |
| ▲ | altairprime 8 hours ago | parent [-] | | No worries. The post I replied to was asking if use of ‘grammar improvement services’ (my paraphrase) qualified as AI-assisted writing at HN. All such services cost something; Grammarly makes a lot of money charging businesses, AI consumes watts of power that someone pays for, and even Microsoft Word’s grammar checker spins up the CPU fans on an old Intel laptop with a long enough document. I took from that the generic point that one “pays” for machine-assisted proofreading by one means or another, whether it’s trading personal data for services (Google) or watts of power for services (MSWord et al.) or donating writing samples to a for-profit training corpus (Grammarly free tier) or paying for evaluations where your data is not retained for training (Grammarly paid enterprise tier with a carefully-redlined service contract) and generalized to “pay for machine proofreading”. Then, I considered whether HN would appreciate posts/comments by a human where they’d had a PR team or a hired editor come in and review/modify/distort their original words in order to make them more whatever. I think that this probably is most likely to have occurred on the HN jobs posts, and I’ve pointed out especially egregious instances to the mods over the years — but in general, the people who post on HN tend to do so from their own voice’s viewpoint, as reaffirmed by the no-AI-writing guideline above. So I decided instead to say “pay a proofreader” because, bluntly, if the community found out that someone was paying a wage to a worker to proofread their HN comments, the response would plausibly be the same mob of laughing mockery, disgusted outrage, and blatant dismissal that we see today towards AI writing here. “You hired someone to tone-edit your HN comments?!” is no different than “You used Grammarly to tone-edit your HN comments?!” to me, and so it passed the veracity test and I posted it. |
|
|
|
|
|
| ▲ | czhu12 9 hours ago | parent | prev | next [-] |
I find it more refreshing these days to read text with broken grammar, incorrect use of pronouns, etc. Especially on HN, the human connection is more palpable. It's rarely so bad that it's not understandable. |
|
| ▲ | glitch13 10 hours ago | parent | prev | next [-] |
I saw a similar conversation somewhere about a project saying it doesn't allow AI-generated code. Someone asked: if "AI-generated code" is just code suggested to you by a computer program, where does using the code that your IDE suggests in a dropdown fall? That's been around for decades. Is the rule LLM- or "gen AI"-specific? If so, what specific aspect makes one use case good and the other bad, and what exactly separates them? It's one of those situations where it seems easy to point at examples and say "this one's good and this one's bad", but when you need to write policy you start drowning in minutiae. |
| |
| ▲ | kazinator 9 hours ago | parent | next [-] | | Projects cannot allow AI generated code if they require everything to have a clear author, with a copyright notice and license. IDE code suggestions come from the database of information built about your code base, like what classes have what methods. Each such suggestion is a derived work of the thing being worked on. | | |
| ▲ | RevEng 2 hours ago | parent [-] | | That is not correct, because it hasn't been tested in court. In past decisions about who owns the output generated by a computer program, the owner has been the operator of the program. You own your Word documents and Photoshopped images. There is good reason to believe that LLM output, where you provided the prompt, would also fit under that umbrella. We are still waiting for that to be tested in court. |
| |
| ▲ | sumeno 7 hours ago | parent | prev [-] | | Nobody is actually confused about what AI generated code means in those cases, they're just trying to be argumentative because they don't like the rules |
|
|
| ▲ | RevEng 2 hours ago | parent | prev | next [-] |
I agree on the editing. We use these things all the time - chances are many of you are using one right now, as you type on your phone and it checks your spelling for you. By the same token, what if I have a human editor help me out? What if we go back and forth on how to write something, including spelling, grammar, tone, etc.? For example, my wife occasionally asks me to review her messages before sending them, because she thinks I speak well and wants to be understood correctly. The problem is that we are punishing the technology, not the result. Whether it's a human or an LLM that acts as your editor should be irrelevant; what matters is that you are posting your own work and not someone else's. My wife having me write all of her messages for her would be just as dishonest as her having an LLM write all of her messages, if she always presented them as her own writing. But if she writes the copy and I provide suggestions for changes, what's the harm in that? And why should it matter whether it's a human or an LLM that provides that assistance? |
|
| ▲ | ern 4 hours ago | parent | prev | next [-] |
| I caught myself structuring a comment like an LLM on another site. It's expected that people who chat heavily to LLMs will start to mirror their styles. |
|
| ▲ | raw_anon_1111 9 hours ago | parent | prev | next [-] |
| There is no need to use any of it. Just use your own words. |
|
| ▲ | asadotzler 4 hours ago | parent | prev | next [-] |
ML-based word or phrase editing is hardly a problem, any more than pre-AI spellcheckers were. AI sentence and paragraph manufacturing is a problem, and everyone knows the difference between that slop and a spellchecker. No one cares if your editor does inline spellchecking or even word autocomplete. What they care about is slop; word-at-a-time spelling and phrase-level grammar checking are harmless. |
|
| ▲ | thousand_nights 9 hours ago | parent | prev | next [-] |
| i don't care if someone has bad grammar, i want to hear their thoughts as they came up with them, we're all intelligent beings and can parse the meaning behind what you write. i type my comments without capitalization like i'm typing into some terminal because i'm lazy and people might hate it but i'm sure they prefer this to if i asked an LLM to rewrite what i type your writing style is your personality, don't let a robot take it away from you |
| |
| ▲ | tempestn 9 hours ago | parent [-] | | I, on the other hand, find incorrect grammar mildly annoying, especially when it's due to laziness. It distracts from the thoughts being conveyed. I appreciate when people take the time to format comments as correctly as they're able. In fact, I'd argue that lazy commenting is the real problem, which has now been supercharged by LLMs. |
|
|
| ▲ | skywhopper 9 hours ago | parent | prev | next [-] |
| I don’t think it’s really necessary to play Captain Nitpick over spell-check or whatever. You know what is meant. |
|
| ▲ | SecretDreams 10 hours ago | parent | prev [-] |
Your comment is one of semantics - worth discussing if we're talking a truly hard-line rule rather than the spirit of the rule. I benefit from my phone flagging spelling errors/typos for me. Maybe it uses AI, or maybe it uses a simple dictionary. It might even catch a string of words where the conjunction isn't correct. That's all fair game, IMO. But it shouldn't be rewriting the sentence for me. And it shouldn't be automatically cleaning up my typos after I've hit "reply". That's on me. |