dang 9 hours ago

The rule has been around for years, but only in case law, i.e. moderation comments (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). What's new is that we promoted it to the guidelines.

Fortunately I found some things we could cut as well, so https://news.ycombinator.com/newsguidelines.html actually got shorter.

---

Edit: here are the bits I cut:

Videos of pratfalls or disasters, or cute animal pictures.

It's implicit in submitting something that you think it's important.

I hate cutting any of pg's original language, which to me is classic, but as an editor he himself is relentless, and all of those bits—while still rules—no longer reflect risks to the site. I don't think we have to worry about cute animal pictures taking over HN.

---

Edit 2: ok you guys, I hear you - I've cut a couple of the cuts and will put the text back when I get home later.

Wowfunhappy 8 hours ago | parent | next [-]

> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.

> If you flag, please don't also comment that you did.

I don't understand why you cut these, they seem important! (I can understand the others, which feel either implied or too specific.)

dang 8 hours ago | parent [-]

Of course they're important, but they're also implicitly encoded into the culture. Cutting something from the guidelines doesn't mean the rule is canceled. HN has countless rules that don't appear explicitly in https://news.ycombinator.com/newsguidelines.html.

I think I'm going to put that one back, though, because it's not a hill I want to die on and I know what arguing with dozens of people simultaneously feels like when you only have 10 minutes.

Wowfunhappy 8 hours ago | parent | next [-]

> Cutting something from the guidelines doesn't mean the rule is canceled.

Understood, but I feel like I see people breaking these ones frequently, so removing the explicit guideline feels to me like a bad idea.

dang 20 minutes ago | parent [-]

People break them whether they're in the list or not. But don't worry, we'll put that one back.

andai 8 hours ago | parent | prev [-]

I seem to recall a rule about "don't downvote something because you disagree with it", but I can't find anything like that.

Not sure if that's really solvable with rules, though.

My experience with downvotes is that people mostly use them as an "I don't like this" button, which is a proxy for "I couldn't think of a counterargument so I don't want to look at it."

(I noted recently that downvotes and counterarguments appear to be mutually exclusive, which I found somewhat amusing.)

Whereas I will often upvote things I personally disagree with, if they are interesting or well reasoned. (This seems objectively better to me, of course, but maybe it's a personality thing.)

dang 8 hours ago | parent [-]

Oh that one is a classic case of people 'remembering' a rule that never existed - there's a name for this illusion but I forget what it is.

See https://news.ycombinator.com/item?id=16131314 and https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... for history...

chrisshroba 8 hours ago | parent | next [-]

> 'remembering' a rule that never existed

Probably the Mandela effect!

https://en.wikipedia.org/wiki/False_memory#Mandela_effect

8 hours ago | parent | prev | next [-]
[deleted]
Kye 6 hours ago | parent | prev [-]

This was (maybe still is) part of "reddiquette." Like the guidelines and case law here, it often found its way into subreddit rules and comments from moderators.

dang 2 hours ago | parent [-]

To me it's just like how, growing up in Canada, we all assumed we had Miranda rights because we watched American TV.

SegfaultSeagull 8 hours ago | parent | prev | next [-]

> I don't think we have to worry about cute animal pictures taking over HN.

Challenge accepted.

dcminter 8 hours ago | parent | next [-]

The real challenge is to do it in a way that's intellectually stimulating. Mind you, The Economist just had an article about the monkey called Punch, so all things are possible...

dang 8 hours ago | parent | prev [-]

The laws of unintended consequences and never posting overhastily. You think you know these things and then blam.

Kim_Bruning 8 hours ago | parent | prev | next [-]

I'd be a wee bit cautious with the "AI edited" part of it, since that might exclude a number of people with disabilities or for whom English is a second (or third, or later) language.

My reading is that the intent is to have a human voice behind the text.

Monitor and see how it goes I guess!

dang 8 hours ago | parent | next [-]

I need to say something about this but it might have to be later as I have to run out the door shortly...

The short version is that we included it to protect users who don't realize how much damage they're doing to their reception here when they think "I'll just run this through ChatGPT to fix my grammar and spelling". I've seen many cases of people getting flamed for this and I don't want more vulnerable users—e.g. people worried about their English—to get punished for trying to improve their contributions. Certainly that would apply to disabled users as well, though for different reasons.

Here are some past cases of these interactions: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....

Edit: uni_baconcat makes the point beautifully: https://news.ycombinator.com/item?id=47346032.

Most rules in https://news.ycombinator.com/newsguidelines.html have a lot of grey area, and how we apply them always involves judgment calls. The ones we explicitly list there are mostly so we have a basis for explaining to people the intended use of the site. HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make them precise.

In other words yes, that bit needs to be applied cautiously and with care, and in this way it's similar to the other rules. Trying to get that caution and care right is something we work at every day.

edanm 7 hours ago | parent | next [-]

That makes this more OK, IMO. I'm otherwise against "AI-edited" being part of the rules — it's very hard to draw the line (does asking an AI for synonyms of a word count?). AI editing is an especially valuable tool for non-native English speakers and the like.

Kim_Bruning 8 hours ago | parent | prev | next [-]

I was close to one such case, and I really appreciate the care and caution you and Tom applied.

BeetleB 7 hours ago | parent | prev | next [-]

Anything I post here is always in my own voice - even when I use an LLM. 95% of the time that grammar/spelling gets fixed, it's because my brain lapsed while typing, not because I don't know the grammar well and am using an LLM to shape my voice.

I would wager that this use case is much more prevalent than ones where the LLM changed the comment significantly enough to change one's voice.

I never copy/paste from an LLM into HN. Everything is typed by myself (and I never "manually" copy LLM content). I don't have any automatic tools for inserting LLM content here.[1]

Always, always, always keep in mind that you don't notice these positive use cases, because they are not noticeable by design. So the problematic "clearly LLM" comments you see may well be a small minority of LLM-assisted comments. Don't punish the (majority) "good" folks to limit the few "bad" ones.

Lastly, I often wish we had a rule for not calling out others' comments as "AI slop" or the like.[2] It just leads to pointless debates on whether an LLM was used and distracts far more than the comment under question. I'm sure plenty of 100% human written comments have been labeled as LLM generated.

[1] The dictation one is a slight exception, and I use it only occasionally when health issues arise.

[2] Probably OK for submissions, but not comments.

Teever 5 hours ago | parent | prev | next [-]

I've thought about fine-tuning a model on the corpus of your HN posts and then offering a service that would allow the user to paste their message into a text box and the Dangified version of their comment would pop out in another box next to it.

I was thinking of calling this service "Dang It."

You say you want to hear posts in people's own voices, but I'm pretty sure that if I did this, the people who used it would find greater acceptance of their comments than if they just posted them as they originally wrote them.

dang 2 hours ago | parent [-]

I very much hope that's not true, and my guess (or desperate wish?) is that the community would pattern-match to it after a while.

One dynamic I don't think has yet been given its due: while AI is training on us, we're also all getting trained on it—that is, the hivemind's pattern-matching ability is also growing. We're heading up the escalation ladder in a pattern-matching race.

But that name is hilarious!

7 hours ago | parent | prev [-]
[deleted]
gus_massa 7 hours ago | parent | prev | next [-]

As a non-native speaker, for me using something like Google Translate is fine; it's literal enough to keep the author's voice. [1]

Also, writing a draft in Google Docs and accepting most [2] of the corrections is fine. The browser fixes the orthography, but 30% of the time I forget to add the s to the verbs. For preposition, I roll a D20 and hope the best.

I'm not sure if these are expert systems, LLMs, or pigeonware.

But I don't like it when someone uses an LLM to rewrite the draft to make it more professional. It kills the personality of the author and may hallucinate details. It's also difficult to know how much of the post was written by the author and how much was autocompleted by the AI.

[1] Remember to check that the technical terms are correctly translated. It used to be bad, but it's quite good now.

[2] most, not all. Sometimes the corrections are wrong.

duskdozer an hour ago | parent [-]

>For preposition, I roll a D20 and hope the best.

This makes me think of something: are nonnative English speakers tempted to use LLMs to correct grammar because mistakes like this actually make the writing unintelligible in their native language? For example, if I swap out the "For" in this sentence for any (?) other preposition, it's still comprehensible. (At|Of|In|By|To|On|With) example, ...
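The combinatorial point can be made concrete with a toy script; the sentence fragment and preposition list are just the ones from the comment above:

```python
# Enumerate the preposition swaps duskdozer lists: each variant still
# reads as comprehensible English (the reader judges; the script just
# generates the candidates).
prepositions = ["At", "Of", "In", "By", "To", "On", "With"]
tail = "example, if I swap out the preposition, it's still comprehensible."

variants = [f"{p} {tail}" for p in prepositions]
for v in variants:
    print(v)
```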

kshacker 8 hours ago | parent | prev [-]

Yes, I recently posted something that was voted down because I mentioned from the get-go that I used help from AI. But the idea was mine, I wrote the first draft, and then worked with AI in 2-3 loops to get it right.

But like dang said ... I do not have time to fight this battle when I have only 10 minutes :)

abtinf 8 hours ago | parent | prev | next [-]

FWIW I think “Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.” is different from the others.

It’s an instruction for how to use the site. It’s helpful to have it in the guidelines for when the flag feature should be used. Without it, the flag link is much more ominous.

Maybe it could be consolidated with the flag-egregious-comments rule?

Edit to add: IMHO it is not at all obvious on this site that flagging stories is meant to be roughly the equivalent of downvoting comments (and that flagging comments doesn’t have a counterpart at the story level).

dom96 8 hours ago | parent | prev | next [-]

I’m really curious how this will go. I have a suspicion that we will see more and more accounts all over the internet being controlled by AI agents and no amount of moderation will be able to stop it.

lurkshark 8 hours ago | parent | next [-]

I assume we’ll end up with proof-of-identity attestation as a part of public posting (e.g. Worldcoin), which doesn’t necessarily solve the issue but will at least identify patterns more likely to be LLMs (e.g. a firehose of posts at all hours of the day from one identity). Then we’ll enter the dystopia of mandated real identity on the internet.

dom96 5 hours ago | parent [-]

I agree. I think that ultimately it will be governments providing services to attest humanity.

They already do to a certain extent via passports. I built a little human verifier using those at https://onlyhumanhub.com

nomel 8 hours ago | parent | prev [-]

Because they long ago passed the Turing test. Moderation won't be able to stop it because humans increasingly can't detect it.

I see well-written people being called "LLM" here all the time, em-dash or not.

nitwit005 8 hours ago | parent | next [-]

Even prior to LLMs, a single comment was rarely enough to identify a bot. Even if nonsensical, there's too little information to separate machine from confused human (plenty of people posting drunk on their phones).

On reddit people sometimes go through the comment history and see that it seems to be a bot, but that's fairly high effort.

jjk166 8 hours ago | parent | prev [-]

The key is to accuse everyone of being an LLM. Those who don't react are bots. Those that fight the charge no matter how often it's levied are also bots, but with better programming. Those that complain at first but give up when too much effort is required are the real humans. Any bot able to feel frustration is cool.

nomel 7 hours ago | parent [-]

Maybe a reasonable approach would be letting people flag posts with a "probably AI" button that, after enough flags, triggers a "bot test" for that account (currently, the "score 5 in this mini game" type seems pretty clanker-proof). If they pass, their posts for the hour, week, whatever show a "not AI" indicator when someone clicks the "probably AI" button.
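The proposed flow can be sketched in a few lines; every name and threshold below is invented for illustration, and none of it reflects anything HN actually implements:

```python
# Hypothetical sketch: accounts accumulate "probably AI" flags; past a
# threshold the account owes a human-verification test, and passing it
# grants a "not AI" badge for a fixed window. All names and numbers
# here are made up.
import time

FLAG_THRESHOLD = 5               # flags before a bot test is triggered
BADGE_DURATION = 7 * 24 * 3600   # "not AI" badge lasts one week

class Account:
    def __init__(self):
        self.flags = 0
        self.verified_until = 0.0

    def flag_probably_ai(self, now=None):
        now = time.time() if now is None else now
        if now < self.verified_until:
            return "not AI"              # badge suppresses the flag
        self.flags += 1
        if self.flags >= FLAG_THRESHOLD:
            return "bot test required"
        return "flag recorded"

    def pass_bot_test(self, now=None):
        now = time.time() if now is None else now
        self.flags = 0
        self.verified_until = now + BADGE_DURATION
```

A real system would also need rate limits on the flag button itself, since a "probably AI" button, like the downvote, invites "I don't like this" usage.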

zahlman 8 hours ago | parent | prev | next [-]

I suppose I should put my comment here instead of at top level.

Exactly when was this point added? It seems somehow not new, but on the other hand it was missing from an archive.today snapshot I found from last July. (I cannot get archive.org to give me anything useful here.)

Edit:

> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.

> If you flag, please don't also comment that you did.

Perhaps these points (and the thing about trivial annoyances, etc.) should be rolled up into a general "please don't post meta commentary outside of explicit site meta discussion"?

dang 2 hours ago | parent [-]

Do you mean when did we add "please don't post generated comments" to the guidelines? A couple days ago IIRC.

8 hours ago | parent | prev | next [-]
[deleted]
1718627440 8 hours ago | parent | prev | next [-]

Does that mean that is now ok to e.g. comment that you did flag something?

dang 2 hours ago | parent [-]

That is one of those enjoyable questions that is best answered by first generalizing it.

Does the absence of a rule against X mean that it's ok to do X? Absolutely not.

It's impossible to list all the things that people shouldn't do. Fortunately we've never walked into that trap.

minimaxir 9 hours ago | parent | prev | next [-]

...Hacker News could use some more cute animal pictures, though.

dang 2 hours ago | parent | next [-]

Coming up on 20 years and we clearly went too far the other way.

thomassmith65 8 hours ago | parent | prev | next [-]

One problem with cute animal pictures is that they appeal to almost everyone, including people who are incapable, for whatever reason, of posting well-reasoned, interesting, respectful comments. The fact that HN is a little dry makes it less appealing to dumbasses.

At any rate, it's too late. The era of organic 'cute animal' content on the internet is dead. AI slop has killed it.

shagie 7 hours ago | parent | next [-]

(I was replying to a now deleted response)

> Slop has an upside?

Not exactly. Rather, it's that the places where one does want to find pictures of people's cute cats and dogs now carry additional moderation/administration burdens to keep the AI-generated content out.

It's not "cute pictures of cats overrunning some place" but rather "even in the places where it was appropriate to post pictures of one's pets, like #mypets or /r/cuteCatPics, because such pictures are appropriate there (so they don't overrun other places), people are now starting fights over AI-generated content."

An example I recently encountered: someone did an AI swap of a cat that was "loafing" for a loaf of bread that looked like a cat. The cat picture would have been fine (with a dozen "aww" and "cute" comments in reply)... the AI cat-loaf picture required moderation actions and some comment defusing over the use of AI.

8 hours ago | parent | prev [-]
[deleted]
f38 6 hours ago | parent | prev | next [-]

AI generated "cutest possible animal" (and "make it cuter") might be mildly interesting.

dev_l1x_be 8 hours ago | parent | prev | next [-]

Coming to LISP in 2038, just in time for when we hit the 2038 bug.

latchkey 8 hours ago | parent | prev [-]

Interestingly, their CSP policies forbid even an extension from inserting an img tag.
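For readers unfamiliar with the mechanism: a Content-Security-Policy restricts where a page may load resources from, and an `img-src` directive that only allows the site's own origin makes the browser refuse an injected `<img>` pointing anywhere else. A hedged illustration (not HN's actual header, which I haven't reproduced):

```http
Content-Security-Policy: default-src 'self'; img-src 'self'; script-src 'self'
```

With `img-src 'self'`, even an `<img>` inserted into the DOM by an extension's content script is subject to the page's policy in most browsers, so the image simply fails to load.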

toomuchtodo 8 hours ago | parent [-]

Strong opinions strongly held.

lowbloodsugar 8 hours ago | parent | prev [-]

Is there a distinction between AI generated and AI edited?

I wanted to share some context that might be helpful: I am autistic, and I have often received feedback that my communication is snarky, rude, or tone-deaf. At work, I've found it helpful to run some of my communications through an AI tool to make my messages more accessible to non-autistic colleagues, and this approach has been working well for me.

dang 2 hours ago | parent | next [-]

userbinator put it somewhat dramatically but has the point. We'd rather hear you in your own voice, even at a cost of misunderstanding your intent sometimes. If you're using HN in good faith—and you are, because otherwise you'd not be worrying about this—then over time it's possible to learn to lessen such misunderstanding, and not only possible but well worth doing.

userbinator 4 hours ago | parent | prev [-]

You can interpret it as: we'd rather you be snarky, rude, and tone-deaf than bland and unhuman. Your workplace may prefer you act like a soulless corporate drone.

I_dream_of_Geni 2 hours ago | parent [-]

...except that "snarky, rude, and tone-deaf" generally gets the downvoting (flagging?) mob to come in and "phoosh".

altairprime a minute ago | parent [-]

[delayed]