tptacek 2 days ago

Lots of rules on HN work that way. It's a whole thing. We probably don't need to get into it here. I think it works pretty well as a system. We have a jurisprudence!

fngjdflmdflg 2 days ago | parent | next [-]

I don't think that is correct. Dang usually links directly to the guidelines and sometimes even quotes the exact guideline being infringed. '"dang" "newsguidelines.html"' returns 20,909 results on Algolia.[0] (Granted, not all of these are by Dang himself; I don't think you can search by user on Algolia?) Some of the finer points relating to specific guidelines may not be spelled out there, e.g. what exactly counts as link bait, but I don't think there are any full-blown rules missing from the guidelines. I think the reason LLMs haven't been added is that it's a new problem, and making a new rule too quickly that may have to change later will just cause more confusion.

[0] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
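(Side note: the Algolia HN Search API does appear to support filtering by author via its tags parameter, so a by-user search is possible even if the web UI doesn't expose it. A minimal sketch in Python, assuming the third-party requests package is installed:)

    # Query the HN Algolia API (https://hn.algolia.com/api) for comments
    # by a single author mentioning newsguidelines.html. Tags are ANDed,
    # so "comment,author_dang" restricts results to dang's comments.
    import requests

    resp = requests.get(
        "https://hn.algolia.com/api/v1/search",
        params={
            "query": "newsguidelines.html",
            "tags": "comment,author_dang",  # comments by dang only
        },
    )
    resp.raise_for_status()
    data = resp.json()
    print(data["nbHits"])  # total matching comments by this author
    for hit in data["hits"]:
        print(hit["objectID"], (hit.get("comment_text") or "")[:80])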

tptacek 2 days ago | parent [-]

No, there are several things like this that aren't explicitly in the guidelines and aren't likely ever to be. We'd get into a very long meta thread talking about what kinds of things land in the guidelines vs. in Dan's "jurisprudence" threads; some other time, maybe.

aspenmayer 2 days ago | parent | next [-]

I think it’s okay to have unwritten rules that are inferred. I am not trying to make the perfect the enemy of the good. That said, is HN best served by this status quo? Folks are genuinely arguing against the reasoning for such a rule in the first place, saying that a rule against LLM-generated content on HN is unenforceable and so pointless; others are likely unaware any such rule even exists; you are countering that the rule is fine, but not so fine that it belongs in the guidelines.

I don’t know if this situation benefits from all of these moving parts; perhaps the finer points ought to be nailed down, considering the explicitness of the rule itself in practice.

tptacek 2 days ago | parent [-]

They're not unwritten.

aspenmayer 2 days ago | parent [-]

Let me pose this hypothetical:

A new user signs up and immediately starts using AI to write all of their comments: they read the guidelines, then had their AI read the guidelines, and both were convinced it was okay to continue, so they did. They told a second user this, and then a third, who decided to train their AI on the guidelines and on upvoted posts, as well as Dan’s posts, your posts, my posts, and everyone else’s.

One day, Dan thinks that someone is using AI in a way that is somewhat questionable, but not against the guidelines. He makes a point of mentioning that using AI on HN shouldn’t be done like that; that they were holding it wrong, basically.

All of the AIs trained on HN take notice of the conditions that led to that other AI getting reprimanded, and adjust their behavior and output so as not to be caught.

If you squint, this is basically the status quo today. HN users who have read the guidelines and made good posts, and who use AI in good faith to assist them in writing posts, will never receive any correction or direction, or a link to the thread where Dan said not to post that way, because they will not get caught. And because they will never learn of the rule or get caught, in their minds they will be in the right, as they don’t know any better. Furthermore, they keep getting upvotes, so it’s smiles all around.

These so-called “good faith AI users” are only differentiated from “bad faith AI users” by being told not to use AI. If users will only receive the instruction not to use AI after being caught doing so, AI users are incentivized not to get caught, not to stop using AI altogether.

There are no upsides to not adding the AI rules to the guidelines. As it is, they are Schrödinger’s rules, in a superposition of existing and not existing.

If you read Dan’s replies in the linked thread, he doesn’t specifically say AI is against the rules, and actually provides feedback that the AI user was using AI “almost right,” basically, implying that there is a right way to use AI on HN:

https://news.ycombinator.com/item?id=42224972

So not only is the rule not in the guidelines; even if you search for it, you won’t find it. I had to email Dan to get the rule in the first place. Do you see how absurd this situation is?

fngjdflmdflg 2 days ago | parent | prev [-]

Can you give some examples?

tptacek 2 days ago | parent [-]

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

Jerrrry 2 days ago | parent | prev [-]

Well said, "Better unsaid."

Shame is the best moderator.

Also, given HN's miscellaneous audience of rule breakers, some rules are better off left unstated. Especially this one: stated, it would be about as effective as a "Gun-Free Zone" sign.