BeetleB (9 hours ago):

> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

Lots of people break HN guidelines. I see it virtually every day.

> And why would you want to "improve your writing" for an HN comment?

Some people like to write well regardless of the medium. Why is that a problem for you?

> I think people here value raw authenticity more than polished writing.

Classic false dichotomy. Asking an LLM for feedback does not make your comment less authentic. As I pointed out elsewhere, it can make your comment more authentic by ensuring that what you had in your head and what you wrote match.

Go and study writing and psychology. For anything of value, it's rare that your first attempt reflects what you meant to say. It's also rare that a first attempt, even when it does reflect what you meant, will be absorbed by the recipient as you intended. Saying what you mean, and having it understood as you meant it, is a difficult skill.

the_af (9 hours ago):

> Lots of people break HN guidelines. I see it virtually every day.

Yes, and AI won't help here. People will use AI to better break the guidelines.

> Go and study writing and psychology

Is this a case where you should have read the guidelines? Maybe an LLM could have helped you here? Please don't tell me to go study anything; you know what they say about ASSuming.

> Some people like to write well regardless of the medium. Why is that a problem for you?

HN is more like talking than writing. And LLMs don't help you write well; they help you sound like a clone, which is unwanted.

> For anything of value, it's rare that your first attempt reflects what you meant to say.

You can always edit your comment. And in any case, HN is like a live conversation. Imagine if your friend AI-edited their speech in real time as they talked to you.

Kim_Bruning (8 hours ago):

Depends on how you use the AI. If you use it a bit like you'd ask a human to proofread your work, AI can actually be quite helpful.

The other important thing you can do is have an AI check your claims before you post. Even with Google and PubMed, a quick check against sources by hand can take 30 minutes or longer, while with AI tooling it takes 5. Guess which one is more likely to actually lead to people checking their facts before they post (even if imperfectly!).

I'm not talking about people who lazily ask the AI to write their post for them, or those who don't actually get the AI to find primary sources. Those people are not being as helpful. Though consider educating them on more responsible tool use as well?

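As an aside, the claim-checking step described above might look something like this minimal sketch, assuming the OpenAI Python SDK; the model name, prompt wording, and function name are illustrative assumptions, not a workflow the commenter specified:

    # factcheck.py - ask an LLM which claims in a draft need sources.
    # Hypothetical sketch; model name and prompt wording are assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def flag_claims(draft: str) -> str:
        """List factual claims in `draft` and suggest how to verify each."""
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption: any capable chat model would do
            messages=[{
                "role": "user",
                "content": (
                    "List each factual claim in the text below. For each, "
                    "suggest a primary source (journal article, PubMed entry, "
                    "official docs) that could confirm or refute it:\n\n" + draft
                ),
            }],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(flag_claims("A manual source check takes 30 minutes or longer."))

The point of the 5-minutes-versus-30 comparison is that a script like this runs in one step, while the human still reads the suggested sources before posting.
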
the_af (4 hours ago):

To clarify my thoughts on this: I'm not against using AI to research or hone your arguments. It's no different from using Wikipedia or googling.

I don't think that's what this new HN guideline is against either.

What I object to is the AI writing your comments for you. I want to engage with other human beings, not the bot-mediated version of them.

BeetleB (2 hours ago):

> To clarify my thoughts on this: I'm not against using AI to research or hone your arguments. It's no different from using Wikipedia or googling.

> I don't think that's what this new HN guideline is against either.

This is actually how many commenters here are interpreting it, though - and that's what I'm pushing back against. They are actively advocating against using LLMs this way.

I don't have the LLM write the comment for me. I (sometimes) give it my draft, along with all the parent comments up to the root, and ask for feedback on specific points: Am I being too argumentative? Am I invoking a logical fallacy? Is it obvious I misinterpreted the comment I'm replying to? Is my comment confusing? Adding questions like "Am I violating an HN guideline?" is fair game.

Earlier today I wrote a lot of comments without the LLM's feedback. In one particular thread I repeatedly misunderstood the original context of the discussion and wasted people's time. When I later gave my draft to the LLM, it alerted me to the problematic comment. Had I used it from the start, I would have saved a lot of people's time.

Incidentally, since I started doing this a few months ago, I've only edited my comment once or twice based on its feedback. Most of the time it just tells me my comment looks good.

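The feedback loop described above might look something like the following minimal sketch, again assuming the OpenAI Python SDK; the model name, prompt wording, and helper name are assumptions rather than anything BeetleB described using:

    # feedback.py - ask an LLM to review a draft HN reply in thread context.
    # Hypothetical sketch; model name and prompt wording are assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    REVIEW_PROMPT = (
        "You review draft Hacker News replies. Answer: Is the draft too "
        "argumentative? Does it invoke a logical fallacy? Does it "
        "misinterpret the comment it replies to? Is it confusing? Does it "
        "violate an HN guideline?"
    )

    def review_draft(draft: str, thread: str) -> str:
        """Return LLM feedback on `draft`, given all parent comments in `thread`."""
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption: any capable chat model would do
            messages=[
                {"role": "system", "content": REVIEW_PROMPT},
                {"role": "user",
                 "content": f"Thread so far:\n{thread}\n\nMy draft reply:\n{draft}"},
            ],
        )
        return response.choices[0].message.content

Note the design choice implied in the comment: the LLM only critiques; the human decides whether to edit, which is why the draft and the thread context are sent but the model is never asked to rewrite anything.
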
yellowapple (3 hours ago):

The problem is that there's a vast gray area between “using AI to research/hone your arguments” and “AI writing your comments for you”, and between the rule itself and dang's various remarks on it, where exactly the rule draws the line is about as clear as mud.

BeetleB (8 hours ago):

> Yes, and AI won't help here. People will use AI to better break the guidelines.

AI is a general-purpose tool; people will use it for many purposes, including yours. I'll wager, though, that your use case is much harder to pull off than mine, and that mine will dominate in number.

> HN is more like talking than writing.

Says you. Many disagree.

> And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

Patently false on both counts. Sorry, you're cherry-picking and not addressing the part of my comment that discusses this.

> Imagine if your friend AI-edited their speech in real time as they talked to you.

When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided the output reflects what he intended.

the_af (4 hours ago):

> I'll wager, though, that your use case is much harder to pull off than mine, and that mine will dominate in number.

I don't know how they compare in difficulty. I only know your use case is now (fortunately!) against HN rules.

> Patently false on both counts. Sorry, you're cherry-picking and not addressing the part of my comment that discusses this.

It's not false. It's one of the major reasons people have come to dislike AI-written comments and articles: it all ends up sounding the same.

> When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided the output reflects what he intended.

In real life? Sounds like a fucking dystopia. But everyone is free to choose the hell they want to live in.

tonyarkles (10 hours ago):

> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

I say this on behalf of all of my neurospicy friends… sometimes, yes. Especially having looked at the whole list of guidelines, I definitely have friends who would struggle to determine whether a given comment fits or not.