thecatapps 12 hours ago

With all of the discourse around hardware attestation, digital ID, and age verification in recent weeks/months, is there actually any good solution to the problems these existing tools (Privacy Pass, WEI, Fraud Defense, uploading IDs) claim to solve? Are there open and privacy-preserving standards that can solve the problem of bots and minors? If not, what would be required to establish one, and is it realistic?

Businesses will do what businesses will do, but it seems to me that having something to point to and saying "do this instead" is more effective than "this sucks and isn't even about security, don't do this at all" — even when the latter is true.
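For context on what "privacy-preserving" means in schemes like Privacy Pass: the issuer signs a token without seeing it, so later redemption can't be linked back to the user. Below is a toy sketch of that idea using textbook RSA blind signatures with deliberately tiny, insecure parameters — the real Privacy Pass protocol uses VOPRFs, not RSA, so treat this purely as an illustration of the unlinkability property.

```python
# Toy RSA blind-signature sketch of the anonymous-token idea behind
# Privacy Pass. Textbook RSA with tiny, INSECURE parameters; real
# deployments use large keys or VOPRF-based constructions.
import random
from math import gcd

# Tiny RSA key held by the token issuer.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

token = 42  # client's token (in practice, derived from a random nonce)

# Client blinds the token with a random factor r coprime to n.
while True:
    r = random.randrange(2, n)
    if gcd(r, n) == 1:
        break
blinded = (token * pow(r, e, n)) % n

# Issuer signs the blinded value; it never learns `token` itself.
blind_sig = pow(blinded, d, n)

# Client unblinds, recovering a valid signature on the original token.
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify with the public key, but the issuer cannot link
# this (token, sig) pair back to the blinded value it signed.
assert pow(sig, e, n) == token
```

The unlinkability comes from the random blinding factor `r`: the issuer sees only `token * r^e mod n`, which is uniformly distributed over the valid message space, so redemption at one site reveals nothing about issuance at another.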

krupan 10 hours ago

What even is the problem? I keep my kids' computers in the living room where it's easy to see what they are doing. Their LAN shuts down at night when I'm asleep. They don't get full control of their own cell phones until they are around 16 years old. Bots on social media discourage me from using it, which is a Good Thing if you ask me.

SchemaLoad 8 hours ago

The problem is that companies have a legitimate reason to want to block AI agents and verify that their users are actually real. And that's incredibly difficult now that the old methods — clicking on squares or reading blurry words — no longer work.

Solving proof of humanity is very difficult without tying it to some kind of ID that is hard to replicate or automate.

ezfe 5 hours ago

> Bots on social media

... are not the problem, no, but bots in general are.

xinayder 10 hours ago

> Are there open and privacy-preserving standards that can solve the problem of bots and minors? If not, what would be required to establish one, and is it realistic?

Ideally there shouldn't be standards for this. What we have already is enough.

Companies claiming they are locking down their services and devices to protect users is total BS. Facebook has admitted it gets around 10% of its ad revenue from scams, and that's precisely why it won't go after scammers on its platforms.

The same can be said for Google. They could come up with numerous ways to block bots or make captchas harder for actual bots (while also not flagging every non-Chrome user as a potential bot, like they do nowadays), but instead they pretend this is an unsolvable problem that requires a nuclear solution: first it was Web DRM, and now it's called Fraud Defense.