AnthonyMouse 10 hours ago

> The other is that because it's not possible to link an attestation to a particular device the only mitigation to abuse that is feasible is rate limiting

I still don't see how you can keep something anonymous and still rate limit it. If a service can tell that two requests came from the same party in order to count them then two services can tell that two requests came from the same party (by both pretending to be the same service) and therefore correlate them.

coppsilgold 10 hours ago | parent | next [-]

The way it would work with blind signatures is that the server will know the device that comes to it to request a blinded signature and will be able to rate limit how often that device asks it.

But once you get the response you can unblind the signed blinded value and obtain the token (which is just the unblinded signature). This token can then be used exactly once, because it's blacklisted after use (and it expires before, say, the next day starts).

The desired property of blind signatures is that, given a token, it's information-theoretically impossible to determine which blinded signature it came from (because it could have come from any of them), even if the cryptographic primitive is broken by a mathematical breakthrough or a quantum computer. There is technically the danger that if the anonymity set is too small and all the other participants collude, you can be singled out.

Correlating times is a threat vector that needs to be managed, either by delaying actions (not tolerable for normal users) or by acquiring tokens automatically ahead of time and storing them. Or by some other approach I haven't thought of. There is also a networking aspect to this: you will need a decentralized relay network that masks the origin of requests.
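The blind/unblind round trip described above can be sketched with textbook RSA. This is illustrative only -- the toy primes, the blinding factor, and the message are all stand-ins, and real deployments use vetted constructions (e.g. standardized RSA blind signatures) with full-size keys:

```python
# Toy RSA blind-signature sketch. All parameters are illustrative
# assumptions, not a production construction.
import hashlib
from math import gcd

# Tiny hardcoded RSA key (small primes chosen only for readability).
p, q = 10007, 10009
n, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # signer's private exponent

def h(msg: bytes) -> int:
    """Hash the message into the RSA group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# --- Client: blind the message before sending it to the signer ---
m = h(b"token-request")
r = 12345                      # blinding factor; must be coprime to n
assert gcd(r, n) == 1
blinded = (m * pow(r, e, n)) % n

# --- Server: signs the blinded value; it never sees m itself ---
signed_blinded = pow(blinded, d, n)

# --- Client: strip the blinding factor to recover a signature on m ---
token = (signed_blinded * pow(r, -1, n)) % n

# Anyone with the public key can verify the token against m...
assert pow(token, e, n) == m
# ...but the signer cannot link `token` back to `blinded`.
```

The server only ever sees `blinded` and `signed_blinded`; the token it later redeems is `m^d mod n`, which it could have produced for any client's blinded request.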

AnthonyMouse 9 hours ago | parent [-]

> But once you get the response you can unblind the signed signature and obtain the token (which is just the unblinded signature).

The premise of this is to keep the person issuing the tokens and the person accepting them from correlating you.

The issue is when you have more than one service accepting them. You go to use Facebook and WhatsApp but they're both Meta so you present the same unblinded signature to both services and now your Facebook and WhatsApp accounts are correlated against your will. And they have a network that does the same thing, so you go to use a third party service and they require you to submit your unblinded signature to Meta which allows them to correlate you everywhere.

coppsilgold 9 hours ago | parent [-]

> you present the same unblinded signature to both services

You would never do this as it defeats the entire purpose of using blind signatures to begin with.

AnthonyMouse 9 hours ago | parent [-]

That's the point. You go to example.com and get the "sign in with Google" box as the only login option, but now you can't have separate uncorrelated Google accounts. Or if browsers do it automatically then every site does a background load or redirect through adtracker.nsa so you're presenting the same token on every service.

It's not the user who wants any of this to begin with. "You would never do that" except that it's now the only way to be let into the service.

nullc 9 hours ago | parent | prev [-]

Just to give an example to prime your intuition: define your "usage token" as H(private_key|service_domain_name|date|4-bit_counter). Make your scheme provably reveal the usage token when you authenticate. Now you can use the service 16 times a day on a particular domain and no more, simply by blocking token reuse. And yet the service has no ability to link different tokens to each other, or to a specific person, because it doesn't have anyone's private key.

You can make variations on this for a wide spectrum of rate limiting behaviors.
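The scheme above is small enough to sketch directly. The hash layout, key, and `Service` class below are assumptions for illustration; the 16-token-per-day-per-domain budget comes from the 4-bit counter in the comment:

```python
# Sketch of the H(private_key | domain | date | 4-bit counter) usage-token
# scheme. Names and structure are illustrative assumptions.
import hashlib

def usage_token(private_key: bytes, domain: str, date: str, counter: int) -> str:
    assert 0 <= counter < 16          # 4-bit counter => 16 tokens/day/domain
    data = b"|".join([private_key, domain.encode(), date.encode(),
                      counter.to_bytes(1, "big")])
    return hashlib.sha256(data).hexdigest()

class Service:
    """Server side: rate limiting is nothing but blocking token reuse."""
    def __init__(self):
        self.seen = set()

    def accept(self, token: str) -> bool:
        if token in self.seen:
            return False              # replayed token -> rejected
        self.seen.add(token)
        return True

key = b"alice-private-key"            # never leaves the client
svc = Service()
# All 16 fresh tokens for the day are accepted...
results = [svc.accept(usage_token(key, "example.com", "2024-01-01", c))
           for c in range(16)]
# ...but replaying any of them fails, capping the client at 16 uses.
replay = svc.accept(usage_token(key, "example.com", "2024-01-01", 0))
```

Because the server only ever stores opaque hashes, two tokens from the same key are unlinkable to it, and tokens for different domains or dates are entirely different values.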

But also I agree with xinayder's comment-- the anti-competitive, anti-privacy, invasive surveillance is unacceptable. There are a lot of risks with ZKPs: that we just make the poison a little less bitter, with the end result being more harm to humanity.

I think ZKP systems are intellectually interesting, and their lack of use helps make it clear that surveillance is really the point of these schemes, not security, because most of the security (or more of it) could be achieved without most of the surveillance.

But allowing the Apple/Google duopoly to control who can read online is wrong even if they did it in a way that better preserved privacy.

And because I can't believe no one else in the thread has linked to it: https://www.gnu.org/philosophy/right-to-read.html

AnthonyMouse 9 hours ago | parent [-]

> define your "usage token" as H(private_key|service_domain_name|date|4-bit_counter)

But how are you preventing multiple services from using the same value for service_domain_name because they're cooperating to correlate your use?

nullc 8 hours ago | parent [-]

Because-- in this hypothetical-- your user agent restricts the usage to the name displayed on the screen, and also because your agent won't send the same value twice either (it'll increment the counter or tell you that it's run out of tokens).
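The agent-side discipline described here could look something like the sketch below: tokens are derived only for the domain the agent actually displays, each use bumps a counter, and the agent refuses rather than ever repeating a value. The class and its API are hypothetical:

```python
# Hypothetical user-agent sketch: per-(domain, date) counters that are
# never reused, with a hard refusal once the daily budget is exhausted.
import hashlib

class TokenAgent:
    def __init__(self, private_key: bytes, budget: int = 16):
        self.key = private_key
        self.budget = budget          # 16 matches the 4-bit counter above
        self.counters = {}            # (domain, date) -> next unused counter

    def token_for(self, displayed_domain: str, date: str) -> str:
        c = self.counters.get((displayed_domain, date), 0)
        if c >= self.budget:
            raise RuntimeError("out of tokens for this domain today")
        self.counters[(displayed_domain, date)] = c + 1   # never reuse a counter
        data = b"|".join([self.key, displayed_domain.encode(),
                          date.encode(), bytes([c])])
        return hashlib.sha256(data).hexdigest()
```

Binding the token to the *displayed* domain is what stops two cooperating sites from requesting tokens under one shared name, and the monotonic counter is what guarantees the same value is never sent twice.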

AnthonyMouse 8 hours ago | parent [-]

Requiring the name to be displayed isn't going to do much for ordinary people. They mostly wouldn't look at it, and even if they did, "continue as-is or no service for you" means they continue as-is.

Not sending the same value twice would prevent them from being correlated, but now what are you supposed to do when you run out? Running you out could even be the goal: you burn a token to get a cookie, and now you can't clear your cookies or you'll be denied a new one, since you're out of tokens.

nullc 8 hours ago | parent [-]

I'll be the first to admit that the technology can be abused-- that it's even ripe for abuse. That sort of problem can be avoided by allowing 'enough'-- and if the goal is just to prevent a site from being flooded out, 'enough' could be pretty high.

Of course, I think the effective purpose of google's attest feature is to invade everyone's privacy which we should assume is part of why they don't use privacy preserving techniques. Privacy preserving techniques could still be abused, however.

Maybe they're even worse for humanity because they make bad schemes more palatable. I think right now I lean towards no: the public in general will currently tolerate the most invasive forms of these systems, so our problem isn't that they're being successfully resisted and that the resistance might be diminished by a scheme which is still bad, but less bad.