pmontra a day ago
Fabricated citations are not errors. A pre-LLM paper with fabricated citations demonstrates the author's willingness to cheat. A post-LLM paper with fabricated citations: same thing, and if the authors try to defend themselves with something like "we trusted the AI," they are sloppy, probably cheaters, and not very good at it.
mapmeld a day ago
Further, if I use AI-written citations to back some claim or fact, what is the claim actually based on? This started happening in law because someone writes the text first and then wishes there were a source that was relevant and actually supported their claim. But if someone puts in the labor to check whether your sources are real and extant, it turns out there is nothing backing the claim (e.g., the MAHA report).
llm_nerd a day ago
> Fabricated citations are not errors.

Interesting that you hallucinated the word "fabricated" here, where I was talking broadly about errors. Humans, right? Can't trust them.

Firstly, just about every paper ever written in the history of papers has errors in it. Some small, some big. Most accidental, but some intentional. Sometimes people are sloppy keeping notes, transpose a row, get a name wrong, or make an off-by-one error. Sometimes they entirely make up data or findings. This is not remotely new; it has happened as long as we've had papers. Find an old, pre-LLM paper and go through the citations -- especially for a tosser target like this, where there are tens of thousands of low-effort papers submitted -- and you're going to find a lot of sloppy citations that are hard to rationalize.

Secondly, the "hallucination" here is that this particular snake-oil firm couldn't find the cited papers in many cases (they aren't foolish enough to think that proves the citations were fabricated, but since they're looking to sell a tool to rubes, the conclusion is good enough), and in others that some of the author names are wrong. Eh.