SV_BubbleTime | 2 hours ago
I can’t explain it well, but I think there is an asymmetry here: the ability of an LLM to write a plausible email and the ability of an LLM to detect that it’s spam are mismatched. If an LLM can make a plausible email, the best another LLM can do is rank it as plausible. Black-box creation and black-box detection sit at the same level. Perhaps if the detection LLM had all your context plus web search, it could know that a Penny Pollytree at Coco Co isn’t a real person, but… that just seems like burning a ton of coal to detect fraud that the creation LLM generated cheaply.

The real story here is that this will go beyond email verification. Every system we have is going to need to up its security. Paper birth certificates, Social Security cards, email addresses, and all manner of identity are going to need new systems of authentication. The challenge will be preventing authoritarian centralization.