jmull 8 hours ago
If the goal is to read what actual humans think, it's hard to see how an LLM filter can do anything but obscure and degrade the content. LLMs, as we know them, express things using the patterns they've been developed to prefer. There's a flattening, genericizing effect built in. If there are people who find an LLM filter to be an enhancement, they can run everything through their favorite LLM themselves.
GMoromisato 7 hours ago
I think it's a spectrum:

1. I enter "Describe the C++ language" into an LLM and post the response on HN. This is obviously useless--I might as well just talk to an LLM directly.

2. I enter "Why did Stroustrup allow diamond inheritance? What scenario was he trying to solve?" and then distill the response into my own words so that it's relevant to the specific post. This may or may not be insightful, but it's hardly worse than consulting Google before posting. (A minimal sketch of the diamond pattern follows below, for anyone who hasn't run into it.)

3. I spend a week creating a test language with a different trade-off for multiple inheritance. Then I ask an LLM to summarize the unique features of the language into a couple of paragraphs, and I post that to HN. This could be a genuinely novel idea, and the fact that it is summarized by an LLM does not diminish the novelty.

My point is that human+LLM can sometimes be better than human alone, just as human+hammer, human+calculator, and human+Wikipedia can be better than human alone. Using a tool doesn't guarantee better results, but claiming that LLMs never help seems silly at this point.
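For reference, here is a minimal illustrative sketch of the diamond referenced in point 2 (the class names A through D are placeholders, not from the comment above): B and C both derive from A, and D derives from both.

    // Without "virtual", D would contain two separate A subobjects
    // and d.value would be ambiguous. Virtual inheritance makes
    // B and C share a single A inside D.
    struct A { int value = 0; };
    struct B : virtual A {};
    struct C : virtual A {};
    struct D : B, C {};  // exactly one A subobject

    int main() {
        D d;
        d.value = 42;  // unambiguous thanks to virtual inheritance
        return 0;
    }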
| ||||||||||||||||||||||||||||||||||||||