GMoromisato 7 hours ago
But what if it turns out that human+LLM can produce more "thoughtful, curious discussion" than human alone? That's the dichotomy: do we prefer text with the right "provenance" over higher-quality text? [Perhaps you'll say that human+LLM text will never be as high-quality as human alone. But I'm pretty sure we've seen that movie before and we know how it ends.]

That said, you're right that because human+LLM is so much more efficient, we'll be drowning in material--and the average quality might even go down, even if the absolute quantity of high-quality content goes up.

I think, in the long term, we will have to come up with more sophisticated criteria for posting rather than just "must be unenhanced human".
Avicebron 7 hours ago
I think "must be unenhanced human" is probably the most sophisticated criteria even if it's simple. I don't think there's much value in trying to optimize the perfect "thoughtful, curious discussion", why would there be, it implies some ideal state for "thoughtful and curious" vs the reality that discussions between living breathing people is interesting by default as long as folks loosely follow some guidelines. | |||||||||||||||||||||||||||||||||||
altairprime 7 hours ago
> what if it turns out that

HN need not offer itself up as a Petri dish for AI writing experimentation. There are startups in that space, and at least one must be YC-funded, statistically speaking. Come back with the outcomes of the experiment you describe and make a case that they should change the rule. Maybe they will! As of today, though, they are apparently unconvinced.

> the average quality might even go down

We have a recent concrete analysis of Show HN indicating support for this possibility, resulting in the mods banning new users from posting to Show HN (something they've probably been resisting for close to twenty years, I imagine, given how frequent a spam vector that must be).

> Perhaps you'll say that human+LLM text will never be as high-quality as human alone

Please don't put words in my mouth, insinuating the tone of my reply before I've made it, and then use that rhetorical device to introduce a flamebait tangent to discredit me. I've made no claims about future capabilities here, and I'm not going to address this irrelevance further.

> in the long term, we will have to come up with more sophisticated criteria

Our current criteria seem sophisticated already. Perhaps you could make a case that AI-assisted writing helps avoid guideline violations. This one tends to be especially difficult for us all today:

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. Eschew flamebait. Avoid generic tangents."
davebranton 7 hours ago
It doesn't matter. The guidelines are perfectly clear, whatever the outcome of your thought experiment. Hacker News wants intelligent conversation between human beings, and that's the beginning and the end of it.

If you want LLM-enhanced conversation, then I'm sure you will find places to have that desire met, and then some. Hacker News is not that place, and I pray that it will never become that place.

In short, and in answer to "Do we prefer text with the right 'provenance' over higher quality text?": Yes. Yes, we do.