sureMan6 | 2 days ago
The pro-LLM rant is weird. LLMs "hallucinate" by creating detailed, elaborate lies, and the frontier models still do this egregiously. An LLM-written article has zero value by default, since every single line could be true or could be a convincingly crafted lie; every line has to be fact-checked.

I'm using Gemini 3.1 Pro to help me research my thesis. Even with search enabled and in pro mode, it invents entire papers that don't exist, and lies about the contents of existing papers to relate them to the context or to appease me. If I submitted an LLM-written article based on the results it's given me, 80% of the article would be lies.

Commenting to complain that an article is LLM-written is helpful too, since some people aren't able to distinguish it themselves.
0xbadcafebee | 2 days ago
> an LLM written article by default has 0 value since every single line could be true or it could be a convincingly crafted lie, every line has to be fact checked

The exact same thing is true of human speech. You have no idea whether anything a human says is true until you fact-check it. But you don't fact-check everything every person says, do you? So what do you do instead? You use heuristics: simple, and quite flawed, subconscious rules for deciding what to stop worrying about. You find a person you like, classify them as "trustworthy," and believe almost all of what they say without considering whether any of it might be false.

But of course humans are fallible, and many of them receive "poisoned" input, and even hallucinate (make up information). They then spread that false information around. Yes, even the people you trust. And when you're faced with something untrue, said by someone you trust, you rationalize it: "Oh, they just made a mistake." And you completely ignore that the person you trust told you a falsehood. Life is hard enough without having to question whether everything we hear is false, so we just accept falsehoods from some people and not others.

LLMs are likely more factual and knowledgeable today than humans are, thanks to constant improvement via reinforcement, and they're going to keep getting better. But they'll never be perfect. Rather than rejecting everything they produce, my suggestion would be to do what you do with humans: trust them a little, verify big things, let the little things go, accept that there will be errors, and move on with life.
WarmWash | 2 days ago
If you are asking an LLM to cite its sources, you are wasting your time and degrading the quality of the response. LLMs have no inherent mechanism for "knowledge source tracking," because that isn't at all how they work. We're trying to get there with agentic stacks, but it's still too new.

For sparse-knowledge tasks, where you know the model can't possibly have had much training data because even humans don't have much knowledge there, use it as a brainstorming partner, not as a source. Or put relevant papers in its context to help you evaluate those papers in relation to your work. But it's just going to hurt itself in confusion trying to tie fuzzy ideas to sparse sources buried in pages upon pages of mildly related Google search results.
kevin42 | 2 days ago
If they can't distinguish LLM text, then why should they care? Anti-AI people like to bring up hallucination as if everything AI generates is false. I can write pages of text with my own content, then use AI to improve my writing and clarity. Then I review and edit. It might leave some LLM markers in there, which I sometimes remove because they're distracting. But the final, AI-assisted writing is easier to read and better organized, and all the ideas are mine. Hallucinations are not remotely a problem in this case.
halJordan | 2 days ago
No, you're being weird (and why are you calling people weird anyway? Not helpful). You're complaining about facts that have been true since words were first written on paper. If you read the article with the same criticality you'd apply to any other article, you won't have the problem you're complaining about. The reality is, you're only complaining because you hate AI. Cool, but don't dress it up, and don't resort to name-calling to browbeat the other guy.