the_af | 4 hours ago
To clarify my thoughts on this: I'm not against using AI to research or hone your arguments. That's no different from using Wikipedia or googling, and I don't think it's what this new HN guideline is against either. What I object to is the AI writing your comments for you. I want to engage with other human beings, not the bot-mediated version of them.
BeetleB | 2 hours ago
> To clarify my thoughts on this, I'm not against using AI to research/hone your arguments. It's no different to using Wikipedia or googling.

> I don't think that's what this new HN guideline is against either.

This is actually how many commenters here are interpreting it, though, and that's what I'm pushing back against: they are actively advocating against using LLMs even in this way.

I don't have the LLM write the comment for me. I (sometimes) give it my draft, along with all the parent comments up to the root, and ask for feedback. I look for specific things: Am I being too argumentative? Am I invoking a logical fallacy? Is it obvious I misinterpreted the comment I'm replying to? Is my comment confusing? Adding checks like "Am I violating an HN guideline?" is fair game.

Earlier today I wrote a lot of comments without using the LLM's feedback. In one particular thread I repeatedly misunderstood the original context of the discussion and wasted people's time. When I later gave my draft to the LLM, it alerted me to the problematic comment. Had I used it from the start, I would have saved a lot of people time.

Incidentally, since I started doing this a few months ago, I've only edited my comment once or twice based on its feedback. Most of the time it just tells me my comment looks good.
yellowapple | 4 hours ago
The problem is that there's a vast range of behavior between “using AI to research/hone your arguments” and “AI writing your comments for you”, and between the rule itself and dang's various remarks on it, exactly where the rule draws that line is about as clear as mud.