BeetleB 3 hours ago
> The majority of the cases where I would say "according to AI, <blah>" are where <blah> actually does cite sources that I feel appear plausible. Sometimes they're links, sometimes they would be other publications not necessarily a click away. Sometimes I could independently verify them by spending half an hour researching, sometimes I can't.

In my experience, LLMs hallucinate citations like crazy. Over 50% of the times I've checked, the citation either didn't exist, or it did but didn't support the LLM's assertions. This is true not just of chat LLMs but also of Google AI summaries.

When the references are more often wrong than not, you can understand why many will simply downvote you for bringing LLM citations into the conversation. Why quote a habitual liar?

(If you look at my other comments, I'm actually in favor of using LLMs in some capacity for HN comments. Just not in this case.)
dataflow 2 hours ago
>> actually does cite sources that I feel appear plausible.

> In my experience, LLMs hallucinate citations like crazy. Over 50% of the times I've checked, the citation either didn't exist, or it did but didn't support the LLM's assertions.

Note that those are specifically not the cases where the AI is citing "sources that I feel appear plausible." (I also don't find over 50% hallucination to be accurate for Google AI summaries in my experience, but that depends on your queries; in any case, I digress...)

> When the references are more often wrong than not, you can understand why many will simply downvote you for bringing LLM citations into the conversation. Why quote a habitual liar?

To be clear, I understand both sides of the argument, and I don't think either side is unreasonable. I've been on both sides of this myself, and I don't think there's a clear-cut answer. I'm just hoping to get clarity on what the new policy is on this point. I'm sure it'll be reevaluated either way as time goes on.