▲ | klabb3 3 days ago |
Before AI, you needed to trust the recipient and the provider (Gmail, Signal, WhatsApp, Discord). You could at least make educated guesses about both when judging the risk profile. For example: if someone leaks the code to this repo, it's likely a collaborator or GitHub. Today, you invite someone to a private repo and the code gets exfiltrated by a collaborator running some AI tool, simply by opening their IDE. Or you send someone an e2ee message on Signal, but their AI reads the screen/text to summarize it, and now that message is exfiltrated.

Yes, I know it's "nothing new" and that "in principle this could happen because you don't control the client". But opsec is also about what happens when well-meaning participants become accomplices in data collection. I used to trust that my friends wouldn't share our conversations. Now the default assumption is that text & media, even in private messaging, will be harvested.

Personally, I'm never giving the keys to the kingdom to a remote data-hungry company, no matter how reputable. I'll reconsider when local or self-hosted AI is available.
▲ | JumpCrisscross 2 days ago | parent | next [-] |
> I used to trust that my friends wouldn't share our conversations. Now the default assumption is that text & media, even in private messaging, will be harvested

I would seriously reëvaluate my trust level in a friend or colleague who installs a non-ADA screen reader on their phone. At least to the level of sharing anything sensitive with them.