tempestn 3 hours ago
I think you're missing their point. The question you're replying to is: how do we know this made-up content is a hallucination, i.e., as opposed to being made up by a human? I think it's fairly obvious via Occam's razor, but still, they're not claiming the quotes could be legit.
DonHopkins 6 minutes ago
The point is that they keep making excuses for not reading the primary source, and are using performative skepticism as a substitute for basic due diligence. You don't need a metaphysics seminar to evaluate this. The person being quoted showed up and said the quotes attributed to him are fake and are not in the linked source: https://infosec.exchange/@mttaggart/116065340523529645

>Scott Shambaugh here. None of the quotes you attribute to me in the second half of the article are accurate, and do not exist at the source you link. It appears that they themselves are AI hallucinations. The irony here is fantastic.

So stop retreating into "maybe it was something else" while refusing to read what you're commenting on. Whether the fabrication came from an LLM or a human is not your get-out-of-reading-free card; the failure is that fabricated quotes were published and attributed to a real person. Please don't comment again until you've read the original post and checked the archived Ars piece against the source it claims to quote. If you're not willing to do that bare minimum, then you're not being skeptical; you're just being lazy on purpose.