detectivestory 7 hours ago
Great idea, but it seems a little futile if there is no protection against LLMs training on HN comments. Ironically, if HN can successfully prevent LLM content, it will become one of the best sources available for training data.
ethin 4 hours ago | parent
Not really, because the biggest problem with LLMs is that they can't write naturally like a human would. No matter how hard you try, their output will always, always seem too mechanical, or something about it will be unnatural, or the LLM will go to the logical extreme of your request (and somehow manage to not sound human)... The list goes on.