with 4 hours ago
nobody's asking who profits from false positives. these AI detection vendors have a direct financial incentive to flag aggressively. more flags = "more value" = more school contracts renewed. same playbook as selling antivirus to your grandma. sell fear, charge per seat, and make the false positive rate someone else's problem. | ||||||||
ipcress_file 4 hours ago | parent
Do you have any evidence to back this up, or is it speculative? My institution subscribes to Turnitin's AI detector. The documentation is quite clear that the system is tuned to accept a significant number of false negatives in order to minimize false positives, and they state that they don't report anything under 20% AI-generated content. So the marketing I've seen is intended to reassure skittish administrators that the software won't generate false accusations. That being said, I have no idea whether the marketing claims are true. The software is a black box.
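The tuning you describe (eat false negatives to suppress false positives) is just the standard classifier-threshold tradeoff: the detector emits a score and the vendor only reports documents above a cutoff. A toy sketch, with completely invented scores, not Turnitin's actual model or numbers:

```python
# Toy threshold tradeoff: raising the reporting cutoff suppresses
# false positives on human-written docs at the cost of missing more
# AI-generated ones. All scores below are invented for illustration.

human_scores = [0.02, 0.05, 0.10, 0.18, 0.25]   # detector scores on human-written docs
ai_scores    = [0.15, 0.40, 0.60, 0.85, 0.95]   # detector scores on AI-generated docs

def rates(cutoff):
    # false positive: human doc scored at or above the cutoff (wrongly flagged)
    fp = sum(s >= cutoff for s in human_scores) / len(human_scores)
    # false negative: AI doc scored below the cutoff (missed)
    fn = sum(s < cutoff for s in ai_scores) / len(ai_scores)
    return fp, fn

for cutoff in (0.10, 0.20, 0.50):
    fp, fn = rates(cutoff)
    print(f"cutoff={cutoff:.2f}  false-positive rate={fp:.2f}  false-negative rate={fn:.2f}")
```

On these made-up numbers, moving the cutoff from 0.10 to 0.50 drives the false-positive rate to zero while the false-negative rate climbs, which is exactly the "reassure administrators" tuning: the marketing question is only whether the real score distributions separate as cleanly as a toy example does.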