LinuxBender a day ago

Could a possible solution there be to use the same language-detection platforms already used to detect terrorist activity to also flag possible grooming for human-moderator review? Or might that be too subjective for current language models, leading to many false positives?
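The pipeline the comment describes is essentially automated triage: a model scores each message and only high-risk messages are routed to human moderators, which bounds the cost of false positives. A minimal sketch of that flow, where `risk_score` is a hypothetical keyword-based stand-in for a trained language model and the threshold is an arbitrary placeholder:

```python
# Sketch of flag-for-human-review triage. The scorer below is a
# hypothetical stand-in; a real system would call a trained classifier.

REVIEW_THRESHOLD = 0.5  # placeholder cutoff, would be tuned on labeled data

def risk_score(message: str) -> float:
    """Hypothetical scorer: counts suspicious phrases instead of using a model."""
    suspicious = ("keep it a secret", "don't tell", "how old are you")
    hits = sum(1 for phrase in suspicious if phrase in message.lower())
    return min(1.0, hits / 2)

def triage(messages: list[str]) -> tuple[list[str], list[str]]:
    """Split messages into a human-review queue and an auto-pass list."""
    review_queue: list[str] = []
    passed: list[str] = []
    for msg in messages:
        if risk_score(msg) >= REVIEW_THRESHOLD:
            review_queue.append(msg)  # a moderator decides, not the model
        else:
            passed.append(msg)
    return review_queue, passed

queue, ok = triage([
    "How old are you? Keep it a secret.",
    "Anyone up for a game tonight?",
])
print(len(queue), len(ok))  # → 1 1
```

The key design point is that the model never takes action itself; flagging only escalates to a human, which is one way to keep the false-positive rate tolerable even with a subjective classification task.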