| ▲ | the_af 10 hours ago |
| When do you need to spellcheck or polish an HN comment? I've never, ever, ever, ever, ever seen anybody complain about spelling mistakes in a comment here. As long as you can understand the comment, people respond to it. |
|
| ▲ | Kim_Bruning 10 hours ago | parent | next [-] |
| Extend spellcheck to asking questions like "does it meet the HN rules?" or "how can I improve my writing?" etc. Though these are the kinds of questions that do, at the very least, still meet the spirit of the rule, I suppose. |
| |
| ▲ | the_af 10 hours ago | parent [-] | | Do you really need an automated tool to tell you whether you're breaking common sense guidelines? And why would you want to "improve your writing" for an HN comment? I think people here value raw authenticity more than polished writing. | | |
| ▲ | BeetleB 9 hours ago | parent | next [-] | | > Do you really need an automated tool to tell you whether you're breaking common sense guidelines? Lots of people break HN guidelines. I see it virtually every day. > And why would you want to "improve your writing" for an HN comment? Some people like to write well regardless of the medium. Why is that a problem for you? > I think people here value raw authenticity more than polished writing. Classic false dichotomy. Asking an LLM for feedback is not making your comment less authentic. As I pointed out elsewhere, it can make your comment more authentic by ensuring that what you had in your head and what you wrote match. Go and study writing and psychology. For anything of value, it's rare that your first attempt reflects what you meant to say. It's also rare that the first attempt, even if it does reflect what you meant, will be absorbed by the recipient exactly as you intended. Saying what you mean, and having it understood as you meant it, is a difficult skill. | | |
| ▲ | the_af 9 hours ago | parent [-] | | > Lots of people break HN guidelines. I see it virtually every day. Yes, and AI won't help here. People will use AI to better break the guidelines. > Go and study writing and psychology Is this a case where you should have read the guidelines? Maybe an LLM could have helped you here? Please don't tell me to go study anything; you know what they say about ASSuming. > Some people like to write well regardless of the medium. Why is that a problem for you? HN is more like talking than writing. And LLMs don't help you write well, they help you sound like a clone, which is unwanted. > For anything of value, it's rare that your first attempt reflects what you meant to say. You can always edit your comment. And in any case, HN is like a live conversation. Imagine if your friend AI-edited their speech in real time as they talked to you. | | |
| ▲ | Kim_Bruning 9 hours ago | parent | next [-] | | Depends on how you use the AI. If you use it a bit like you'd ask a human to proof-read your work, AI can actually be quite helpful. The other important thing you can do is have an AI check your claims before you post. Even with Google and PubMed, a quick check against sources by hand can take 30 minutes or longer, while with AI tooling it takes 5. Guess which one is more likely to actually lead to people checking their facts before they post (even if imperfectly!). I'm not talking about people who lazily ask the AI to write their post for them, or those who don't actually go through and get the AI to find primary sources. Those people are not being as helpful. Though try to consider educating them on more responsible tool use as well? | | |
| ▲ | the_af 4 hours ago | parent [-] | | To clarify my thoughts on this, I'm not against using AI to research/hone your arguments. It's no different from using Wikipedia or googling. I don't think that's what this new HN guideline is against either. What I object to is the AI writing your comments for you. I want to engage with other human beings, not the bot-mediated version of them. | | |
| ▲ | BeetleB 2 hours ago | parent | next [-] | | > To clarify my thoughts on this, I'm not against using AI to research/hone your arguments. It's no different from using Wikipedia or googling. > I don't think that's what this new HN guideline is against either. This is actually how many commenters here are interpreting it, though - and that's what I'm pushing back against. They are actively advocating against using LLMs this way. I don't have the LLM write the comment for me. I (sometimes) give it my draft, along with all the parents up to the root, and get feedback. I look for specific things (Am I being too argumentative? Am I invoking a logical fallacy? Is it obvious I misinterpreted a comment I'm replying to? Is my comment confusing? etc.). Adding things like "Am I violating an HN guideline?" is fair game. Earlier today I wrote a lot of comments without using the LLM's feedback. In one particular thread I repeatedly misunderstood the original context of the discussion and wasted people's time. I posted my draft to the LLM and it alerted me to my problematic comment. Had I used it originally, I would have saved a lot of people time. Incidentally, since I started doing this (a few months ago), I've only edited my comment once or twice based on its feedback. Most of the time it just tells me my comment looks good. | |
| ▲ | yellowapple 4 hours ago | parent | prev [-] | | The problem is that there's a vast range of values between “using AI to research/hone your arguments” v. “AI writing your comments for you”, and between the rule itself and dang's various remarks on it, where exactly the rule draws the line is about as clear as mud. |
|
| |
| ▲ | BeetleB 8 hours ago | parent | prev [-] | | > Yes, and AI won't help here. People will use AI to better break the guidelines. AI is a general purpose tool. People will use AI for multiple reasons, including yours. I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number. > HN is more like talking than writing. Says you. Many disagree. > And LLMs don't help you write well, they help you sound like a clone, which is unwanted. Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this. > Imagine if your friend AI-edited their speech in real-time as they talked to you. When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended. | | |
| ▲ | the_af 4 hours ago | parent [-] | | > I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number. I don't know how comparatively challenging it is; I only know your use case is now (fortunately!) against the HN rules. > Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this. It's not false. It's one of the major reasons people have come to dislike AI-written comments and articles: it all ends up sounding the same. > When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended. In real life? Sounds like a fucking dystopia. But everyone is free to choose the hell they want to live in. |
|
|
| |
| ▲ | tonyarkles 10 hours ago | parent | prev [-] | | > Do you really need an automated tool to tell you whether you're breaking common sense guidelines? I say this on behalf of all of my neurospicy friends… sometimes, yes. Especially having taken a look at the whole list of guidelines, I'm definitely friends with people who could struggle to determine whether a given comment fits or not. |
|
|
|
| ▲ | BeetleB 9 hours ago | parent | prev | next [-] |
| People who are particular about spelling do not want to write misspelled words! It's not about whether you/others will tolerate it. I have my standards, and I hold to them. I personally don't use an LLM to spellcheck (browser spellcheck works fine), but I see no problem with someone using an LLM to point out spelling errors. And while I don't complain about others' spelling errors, I sure do notice them. And if someone writes a long wall of text as one giant paragraph that has lots of spelling/grammatical issues, chances are very high I won't read it. Some people write very poorly by almost any standard. If an LLM helps the person write better, I'm all for it. There's a world of difference between copy/pasting from the LLM and asking it for feedback. |
| |
| ▲ | the_af 9 hours ago | parent [-] | | > I have my standards, and I hold to them. Spellcheckers exist, you don't need an AI to change your voice. Also, if you have standards, you can always train yourself to spell better! | | |
| ▲ | BeetleB 8 hours ago | parent [-] | | > Spellcheckers exist, you don't need an AI to change your voice. How is using an AI to spell check changing my voice? Yes, thank you - I know spellcheckers exist, as my comment clearly states. The amusing thing is that an LLM that had access to the thread would have alerted you to a basic error you're making. > Also, if you have standards, you can always train yourself to spell better! "You can always ..." is not an argument against alternatives. | | |
| ▲ | the_af 4 hours ago | parent [-] | | Calm down. You're getting defensive, but it's not warranted. I'm not attacking you. > The amusing thing is that an LLM who had access to the thread would have alerted you to a basic error you're making. I didn't make the "basic error" of assuming you didn't know spellcheckers existed. I was stressing that since spellcheckers already exist, you don't need an AI assisting your comment writing. More basic, non-style-altering alternatives exist and are better. > "You can always ..." is not an argument against alternatives. The argument I'm making is that if you care so much about standards, you can always hone them yourself instead of taking the lazy way out of having an AI write for you. Alternatively, if you're lazy, then your standards aren't too high. And yes, this is an argument against the alternative you're suggesting. | | |
| ▲ | yellowapple 3 hours ago | parent [-] | | > The argument I'm making is that if you care so much about standards you can always hone them yourself instead of taking the lazy way out of having an AI write for you. It's pretty clear that in this case the use of AI is not a matter of laziness, but rather quality/consistency assurance. I use code formatters not because I'm too lazy to indent code myself, but because it helps guarantee that it's formatted consistently. I use a stud finder when mounting things to walls not because I'm too lazy to do the “knock on the wall” trick, but because the stud finder is more precise and reliable at it. I don't use AI to edit my comments, but if I did, it would be not because I'm too lazy to check for all the things I want to avoid putting in my comments, but as an extra layer of assurance on top of what I've already trained myself to do. |
|
|
|
|
|
| ▲ | vova_hn2 10 hours ago | parent | prev | next [-] |
| I think that people subconsciously perceive grammatically correct and stylistically appropriate writing as more authoritative, and the author as a smarter and/or better-educated person. At least that was the case before LLMs became a thing; now I'm not sure anymore. |
|
| ▲ | bryanlarsen 10 hours ago | parent | prev | next [-] |
| Obvious spelling mistakes are usually ignored, but there are certain types of writing mistakes that really trigger the type of people who frequent HN. For example, use "literally" for exaggeration rather than in the original meaning of the word and you'll likely trigger somebody. |
| |
| ▲ | the_af 9 hours ago | parent [-] | | I've never seen this, unless "literally" really clashed with the intent of the comment (as in, it changed the meaning). It's against the HN guidelines to focus on punctuation, spelling, etc., as long as the comment is understood. And, in any case, it's now against the guidelines to write using an AI :) | | |
| ▲ | bryanlarsen 4 hours ago | parent [-] | | Perhaps not for the word "literally", but you've never seen anybody make a pedantic correction about word usage? | | |
| ▲ | the_af 4 hours ago | parent [-] | | To be clear, I've seen it in the wild, but not here, where it's discouraged to pick on words instead of focusing on the substance of what's being said. | | |
| ▲ | bryanlarsen 4 hours ago | parent | next [-] | | Here's a better example. Use "a few bad apples" wrong, and you'll likely get a response. A few bad apples will cause the entire barrel to spoil rapidly, so a few bad apples is a big deal. But it's often used to say the opposite, that a few bad apples isn't a big deal. | |
| ▲ | bryanlarsen 4 hours ago | parent | prev [-] | | I wish I had posted a better example, but I couldn't recall anything at the moment and still can't. It's usually a more interesting complaint than the old-man-shakes-fist-at-clouds griping about the usage of the word "literally". | | |
| ▲ | the_af 4 hours ago | parent [-] | | OK, but let's dig deeper. Would you prefer to be corrected on some logical fallacy/mistake you made in your argument, by another human being (and yes, maybe get slightly upset about it, we're human beings after all), or have both sides present bot-mediated iron-clad comments, like operators sparring with robots? I prefer the raw, flawed human version. Even if, yes, I make a silly, avoidable mistake, or get upset, or make you upset in the heat of the argument. Maybe when I cool down I will have learned something. I don't want flawless robotic arguments. I want human beings. (Fuck, that last bit sounded like an AI-ism, but I promise it's me, a human!). |
|
|
|
|
|
|
| ▲ | cogman10 10 hours ago | parent | prev [-] |
| I've been hit by spelling/grammar noise once or twice. Those are usually downvoted and/or flagged. |
| |
| ▲ | everybodyknows 9 hours ago | parent [-] | | Typos like an/as, of/or, an/and waste the reader's time. That some care be taken to avoid them is no more than common courtesy. |
|