nine_k 5 days ago
I can't help but immediately think of a counteracting piece of software: one that asks an LLM for variations of a paragraph, a phrase, or a few synonyms, and then types the result the way a human would, with pauses, typos, cursor navigation, rearranging pieces via copy-paste, and so on. That's not to say your software will be useless. But as long as there is an incentive to cheat, new and better tools that facilitate cheating will keep cropping up. Something else needs to change.
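A rough sketch of what that mimicry could look like, in Python. The timing constants, typo rate, and pause logic are pure guesses for illustration, not measured human behaviour:

    import random

    def humanize(text, typo_rate=0.03):
        """Convert text into (key, delay) events, where delay is the gap in
        seconds before the key is pressed. Constants are illustrative only."""
        events = []
        extra_pause = 0.0
        for ch in text:
            delay = max(random.gauss(0.11, 0.03), 0.02) + extra_pause
            extra_pause = 0.0
            # Occasionally hit a neighbouring key first, then correct it.
            if ch.isalpha() and random.random() < typo_rate:
                events.append((random.choice("asdfghjkl"), delay))
                events.append(("<backspace>", max(random.gauss(0.25, 0.08), 0.05)))
                delay = max(random.gauss(0.15, 0.05), 0.02)
            events.append((ch, delay))
            if ch in ".,;:!?":
                extra_pause = random.uniform(0.3, 0.9)   # think-pause after punctuation
            elif ch == " " and random.random() < 0.05:
                extra_pause = random.uniform(1.0, 3.0)   # occasional longer stall
        return events

    if __name__ == "__main__":
        for key, delay in humanize("Not that your software is useless."):
            print(f"{key!r}  +{delay:.2f}s")

Feeding these events to any keystroke-injection tool would make the paste look, superficially, like live typing.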
enjeyw 5 days ago | parent
Yeah, it's a good callout. I think it's a (more) winnable battle, though. For both a keystroke-based AI detector and software designed to mimic human keystroke patterns, performance will be determined by the size of the dataset of genuine human keystrokes each side has. The detector has an inherent leg up here, because it's constantly collecting more data through normal use of the tool, whereas the mimic software has no built-in loop for collecting those inputs.
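For illustration, here's a minimal sketch of the kind of inter-key-interval features such a detector might train on. The feature choices and the synthetic session generator are my own assumptions, not how the actual tool works:

    import random
    import statistics

    def interval_features(press_times):
        """Reduce one typing session (key-press timestamps in seconds) to a
        small feature vector: mean inter-key gap, gap variability, and the
        share of long think-pauses."""
        gaps = [b - a for a, b in zip(press_times, press_times[1:])]
        return (
            statistics.mean(gaps),
            statistics.pstdev(gaps),
            sum(g > 1.0 for g in gaps) / len(gaps),
        )

    def fake_session(humanlike, n=200):
        """Synthetic demo data: human-like sessions get jittery gaps plus
        occasional long pauses; scripted ones get near-uniform gaps."""
        t, times = 0.0, []
        for _ in range(n):
            t += max(random.gauss(0.12, 0.05), 0.02) if humanlike else random.uniform(0.09, 0.11)
            if humanlike and random.random() < 0.03:
                t += random.uniform(1.0, 4.0)
            times.append(t)
        return times

    if __name__ == "__main__":
        print("human   :", interval_features(fake_session(True)))
        print("scripted:", interval_features(fake_session(False)))

Whatever the real feature set is, the point stands: a model like this improves with every genuine session logged, and the detector is the side that gets those sessions for free.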