aspenmayer 2 days ago

Let me pose this hypothetical:

A new user signs up and immediately starts using AI to write all of their comments: they read the guidelines, then had their AI read the guidelines, and both were convinced it was okay to continue, so they did. They told a second user this, and then a third, who decided to train their AI on the guidelines and on upvoted posts, as well as Dan’s posts, your posts, my posts, and everyone else’s.

One day, Dan thinks that someone is using AI in a way that is somewhat questionable, but not against the guidelines. He makes a point of saying that AI shouldn’t be used on HN like that; that they were holding it wrong, basically.

All of the AIs trained on HN take notice of the conditions that led to that other AI getting reprimanded, and adjust their behavior and output so as not to get caught.

If you squint, this is basically the status quo today. HN users who have read the guidelines and made good posts, and who use AI in good faith to assist their writing, will never receive any correction, direction, or link to the thread where Dan said not to post that way, because they will not get caught. And because they will neither learn of the rule nor get caught, in their minds they are in the right, since they don’t know any better. Furthermore, they keep getting upvotes, so it’s smiles all around.

These so-called “good faith AI users” are differentiated from “bad faith AI users” only by having been told not to use AI. If users only receive that instruction after being caught, then AI users are incentivized not to get caught, not to stop using AI altogether.

There is no upside to leaving the AI rules out of the guidelines. As it stands, they are Schrödinger’s rules, in a superposition of existing and not existing.

If you read Dan’s replies in the linked thread, he doesn’t specifically say AI is against the rules; he actually tells the AI user they were using AI “almost right,” basically implying that there is a right way to use AI on HN:

https://news.ycombinator.com/item?id=42224972

So not only is the rule absent from the guidelines; even if you search for it, you won’t find it. I had to email Dan just to learn the rule in the first place. Do you see how absurd this situation is?