ghywertelling, 6 hours ago:
If LLMs can identify a person across websites, I can ask an LLM to read up on their posts and write like them, impersonating them, and this then feeds back into the very tools that identify them. I can probabilistically malign a person this way.
JohnMakin, 6 hours ago (reply):
This is already a thing people have done at least as far back as when I started getting into web privacy, roughly 10 years ago. I have been the target of it myself. LLMs are probably better at it, but I don't know whether it is as destructive as people might guess; that is probably highly person-dependent. The micro-signals this paper discusses are more difficult to fake.
john_strinlai, 6 hours ago (reply):
Stylometry is only one aspect of de-anonymization. What you describe is certainly a threat we will have to deal with, but there is a lot more to credible impersonation than being able to mimic a writing style.
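For readers unfamiliar with the term, stylometric attribution can be illustrated with a toy sketch: build character-n-gram frequency profiles from known writing samples and attribute an unknown text to the most similar profile. Everything here (author names, sample texts) is invented for illustration; real systems use far richer features than raw trigram counts.

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Frequency profile of overlapping character n-grams."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two Counter-based frequency vectors."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Known writing samples per candidate author (toy data).
profiles = {
    "alice": char_ngrams("honestly, i reckon the whole scheme falls apart under load."),
    "bob": char_ngrams("The system will fail. It will fail because of load."),
}

# Anonymous text we want to attribute.
unknown = char_ngrams("honestly, i reckon this approach falls apart quickly.")

# Pick the candidate whose profile is most similar to the unknown text.
best = max(profiles, key=lambda author: cosine(profiles[author], unknown))
print(best)
```

The point of the thread stands either way: a mimic that reproduces these surface statistics can poison exactly this kind of matching, which is why the micro-signals mentioned above matter more than n-gram style alone.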
functionmouse, 6 hours ago (reply):
So this means de-anonymization doesn't work? Rejoice?
Jerrrrrrrry, 6 hours ago (reply):
How to conduct a psy-op | ||