Asking an LLM for its thought process only generates hallucinations. Spotting AI images is the same: the subtleties people point to are ex post facto justifications, not necessarily the actual cues that trip their BS detectors.