| ▲ | coppsilgold 13 hours ago |
| Requiring authorized silicon (and software) isn't even the biggest problem here. They do not use zero-knowledge proof systems or blind signatures, so every time you use your device to attest, you leave behind something (the attestation packet) that can be used to link the action to your device. They put on a show about how much they care about your privacy by introducing indirection into the process (a static device 'ID' is used to acquire an ephemeral 'ID' from an intermediate server), but it's just a show, because you don't know what those intermediary servers are doing: you should assume they log everything. And this is just the remote attestation vector; the DRM 'ID' vector is even worse (no meaningful indirection, every license server has access to your burned-in-silicon static identity). And the Google account vector is what it is. Using blind signatures for remote attestation has actually been proposed, but no one notable is currently using it: <https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation> There are several possible reasons for this. The obvious one is that they want to be able to violate your privacy at will, or are mandated to have the capability. The other is that because it's not possible to link an attestation to a particular device, the only feasible mitigation against abuse is rate limiting, which may not be good enough for them - an adversary could set up a farm where every device generates $/hour by providing remote attestations to 'malicious' actors. |
|
| ▲ | AnthonyMouse 11 hours ago | parent | next [-] |
| > The other is that because it's not possible to link an attestation to a particular device the only mitigation to abuse that is feasible is rate limiting I still don't see how you can keep something anonymous and still rate limit it. If a service can tell that two requests came from the same party in order to count them then two services can tell that two requests came from the same party (by both pretending to be the same service) and therefore correlate them. |
| |
| ▲ | coppsilgold 11 hours ago | parent | next [-] | | The way it would work with blind signatures is that the server knows which device comes to it to request a blind signature, and can rate limit how often that device asks. But once you get the response, you can unblind it and obtain the token (which is just the unblinded signature). The token can then be used once, either because it's blacklisted after use or because it expires (before the next day starts, for example). The desired property of blind signatures is that, given a token, it is information-theoretically impossible to determine which blinded signature it came from (because it could have come from any of them), even if the cryptographic primitive is broken by a mathematical breakthrough or a quantum computer. There is technically the danger that if the anonymity set is too small and all the other participants collude, you can be singled out. Correlating times is a threat vector that needs to be managed, either by delaying actions (not tolerable for normal users) or by acquiring tokens automatically and storing them in advance. Or something else I haven't thought of. There is also a networking aspect: you would need a decentralized relay network that masks the origin of requests. | | |
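The blind-signature round trip described above can be sketched in a few lines. (An illustrative toy only: textbook RSA blinding with deliberately tiny demo primes and a made-up token string; a real deployment would use full-size keys and a vetted blind-signature scheme.)

```python
import hashlib
import secrets
from math import gcd

# Issuer's RSA key (tiny demo primes; real keys are 2048+ bits)
p, q = 10007, 10009
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def issue_blind_signature(blinded: int) -> int:
    # The issuer signs the blinded value. This is where per-device
    # rate limiting happens; the issuer never sees the token inside.
    return pow(blinded, d, n)

# Device: derive a token value and blind it with a random factor r
m = int.from_bytes(hashlib.sha256(b"one-time-token").digest(), "big") % n
while True:
    r = secrets.randbelow(n)
    if r > 1 and gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

s_blind = issue_blind_signature(blinded)  # issuer sees only the blinded value
s = (s_blind * pow(r, -1, n)) % n         # unblind: s is a valid signature on m

assert pow(s, e, n) == m  # any verifier holding the public key (n, e) accepts
```

The issuer rate limits calls to `issue_blind_signature` per device; the verifier later sees only `s`, which is unlinkable to any particular blinded request.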
| ▲ | AnthonyMouse 10 hours ago | parent [-] | | > But once you get the response you can unblind the signed signature and obtain the token (which is just the unblinded signature). The premise of this is to keep the person issuing the tokens and the person accepting them from correlating you. The issue is when you have more than one service accepting them. You go to use Facebook and WhatsApp but they're both Meta so you present the same unblinded signature to both services and now your Facebook and WhatsApp accounts are correlated against your will. And they have a network that does the same thing, so you go to use a third party service and they require you to submit your unblinded signature to Meta which allows them to correlate you everywhere. | | |
| ▲ | coppsilgold 10 hours ago | parent [-] | | > you present the same unblinded signature to both services You would never do this as it defeats the entire purpose of using blind signatures to begin with. | | |
| ▲ | AnthonyMouse 9 hours ago | parent [-] | | That's the point. You go to example.com and get the "sign in with Google" box as the only login option, but now you can't have separate uncorrelated Google accounts. Or if browsers do it automatically then every site does a background load or redirect through adtracker.nsa so you're presenting the same token on every service. It's not the user who wants any of this to begin with. "You would never do that" except that it's now the only way to be let into the service. |
|
|
| |
| ▲ | nullc 10 hours ago | parent | prev [-] | | Just to give an example to prime your intuition: define your "usage token" as H(private_key|service_domain_name|date|4-bit_counter). Make your scheme provably reveal the usage token when you authenticate. Now you can use the service 16 times a day on a particular domain and no more, simply by blocking token reuse. And yet the service has no ability to link different tokens to each other or to a specific person, because they don't have anyone else's private keys. You can make variations on this for a wide spectrum of rate-limiting behaviors. But I also agree with xinayder's comment: the anticompetitive, anti-privacy, invasive surveillance is unacceptable. There is a risk with ZKPs that we just make the poison a little less bitter, with the end result being more harm to humanity. I think ZKP systems are intellectually interesting, and their lack of use makes it clearer that surveillance really is the point of these schemes, not security, because most of the security (or more of it) could be achieved without most of the surveillance. But allowing the Apple/Google duopoly to control who can read online is wrong even if they did it in a way that better preserved privacy. And because I can't believe no one else in the thread has linked to it: https://www.gnu.org/philosophy/right-to-read.html | | |
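The token construction above can be made concrete with a plain hash. (A sketch only: in the scheme as described, the token would be revealed via a zero-knowledge proof so the private key never leaves the device; the bare hash here just shows the rate-limit structure, and all names are illustrative.)

```python
import hashlib
from datetime import date

def usage_token(private_key: bytes, domain: str, day: str, counter: int) -> str:
    # H(private_key | domain | date | 4-bit counter), as in the comment above
    assert 0 <= counter < 16  # 4-bit counter: at most 16 tokens per day per domain
    data = b"|".join([private_key, domain.encode(), day.encode(), bytes([counter])])
    return hashlib.sha256(data).hexdigest()

# The service blocks reuse of any token it has already seen:
seen = set()
key = b"device-secret"
today = date(2026, 1, 1).isoformat()
for i in range(16):
    t = usage_token(key, "example.com", today, i)
    assert t not in seen  # each counter value yields a fresh token
    seen.add(t)
# A 17th request that day would have to reuse one of the 16 tokens,
# which the service rejects - rate limiting without identifying anyone.
```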
| ▲ | AnthonyMouse 9 hours ago | parent [-] | | > define your "usage token" as H(private_key|service_domain_name|date|4-bit_counter) But how are you preventing multiple services from using the same value for service_domain_name because they're cooperating to correlate your use? | | |
| ▲ | nullc 9 hours ago | parent [-] | | Because-- in this hypothetical-- your user agent restricts the usage to the name displayed on the screen and also because your agent won't send the same value twice either (it'll increment the counter or tell you that its run out of tokens). | | |
| ▲ | AnthonyMouse 9 hours ago | parent [-] | | Requiring the name to be displayed isn't going to do much for ordinary people. They mostly wouldn't look at it and even if they did, "continue as-is or no service for you" means they continue as-is. Not sending the same value twice would prevent them from being correlated, but now what are you supposed to do when you run out? Running you out could even be the goal: You burn a token to get a cookie and now you can't clear your cookies or you'll be denied a new one since you're out of tokens. | | |
| ▲ | nullc 9 hours ago | parent [-] | | I'll be the first to admit that the technology can be abused-- that it's even ripe for abuse. That sort of problem can be avoided by allowing 'enough'-- and if the goal is to just prevent a site being flooded out 'enough' could be pretty high. Of course, I think the effective purpose of google's attest feature is to invade everyone's privacy which we should assume is part of why they don't use privacy preserving techniques. Privacy preserving techniques could still be abused, however. Maybe they're even worse for humanity because they make bad schemes more palatable. I think right now I lean towards no: the public in general will currently tolerate the most invasive forms of these systems, so our issue isn't that they're being successfully resisted and the resistance might be diminished by a scheme which is still bad but less bad. |
|
|
|
|
|
|
| ▲ | zx8080 6 hours ago | parent | prev | next [-] |
| > Requiring authorized silicon (and software) isn't even the biggest problem here.
It is indeed the biggest issue. It prevents me from owning and using the hardware I pay for, own, or make myself. It's turning the personal computer as we know it from something open into something proprietary, owned by 2 large US corporations. I don't agree that it's not a problem. |
| |
| ▲ | brabel 2 hours ago | parent [-] | | Did you just read “not even the biggest problem” as “not a problem”? | | |
|
|
| ▲ | xinayder 11 hours ago | parent | prev | next [-] |
| Can we stop normalizing being surveilled online and on our devices? By saying something like "the problem is not hardware attestation, but that they don't use ZKP", you are normalizing the new behavior. You shouldn't. It doesn't matter if they use ZKP or the latest, most secure technology for hardware attestation. The issue is hardware attestation itself. It's the same with Age ID. The issue is not that Age ID is prone to data leaks; the problem itself is Age ID. |
| |
| ▲ | userbinator 11 hours ago | parent | next [-] | | Hell yes. I was going to post the same comment. I don't give a flying fuck how it's implemented. Remote attestation is inherently evil. I remember the WEI apologists trying to do the same thing to derail the argument. The problem is the goal, not the details. Just say no: DO NOT WANT! | | |
| ▲ | lxgr 11 hours ago | parent | next [-] | | Remote attestation is a technology, not a policy or a political effort, so it can't be inherently evil. You can disagree with all its known or proposed uses, but then I think it makes more sense to name these. | | |
| ▲ | xinayder 11 hours ago | parent | next [-] | | DRM is a technology and is inherently evil.
Web attestation is DRM for the web, and is inherently evil.
Age ID is a technology and is inherently evil. We have had over 30 years of the world wide web, and for those more than 3 decades this was never a problem. Suddenly we "need" to create new technologies that seem to be security features but are essentially just being used for evil, and are thus inherently bad. It's not like these technologies were created for the greater good and misappropriated by bad actors. They were proposed by bad actors in the first place; they cannot be inherently good. | |
| ▲ | lxgr 10 hours ago | parent | next [-] | | DRM is arguably a specific use of various generic technologies, ranging from whitebox cryptography to trusted computing. I don't think remote attestation (or, even more so, its umbrella technology, trusted computing) is nearly as specifically targeted as DRM. > We have over 30 years of the world wide web and for these more than 3 decades this was never a problem. Suddenly, we "need" to create new technology that seem to be security features, but are essentially just being used for evil, thus being inherently bad. I agree that requiring remote attestation for generic web use is evil. It's way too heavy-handed an approach, better reserved for genuinely high-stakes uses. I still don't think this somehow outright disqualifies the technology itself. | |
| ▲ | charcircuit 8 hours ago | parent | prev [-] | | >We have over 30 years of the world wide web and for these more than 3 decades this was never a problem. Are you seriously trying to suggest copyright infringement has not been an issue over the last 30 years? Both of them are solutions to problems that we've had over the last 30 years and were created for the greater good to solve problems that developers were facing. | | |
| ▲ | lisabytes 16 minutes ago | parent | next [-] | | Movies, games and music are multi billion dollar industries, in what way have they struggled in a world of endless piracy being possible? | |
| ▲ | xinayder an hour ago | parent | prev [-] | | Tell me when DMCA law has worked in favor of small companies/developers? DMCA is abused every. single. time. |
|
| |
| ▲ | pigeons 4 hours ago | parent | prev | next [-] | | I think people are too quick to dismiss the possibility that some technologies are just bad and harmful and we can't shrug off responsibility and say I'm just making a neutral technology and the people using it are the ones causing harm. | |
| ▲ | userbinator 11 hours ago | parent | prev | next [-] | | Then explain why RA was invented? It is inherently against user freedom, just like "secure" boot and the rest of the corporate-authoritarian crap. People have woken up to the truth as the pieces come together. This article from 2022 is fun to look at and see how prescient it was: https://news.ycombinator.com/item?id=29859106 | | |
| ▲ | MadnessASAP 3 hours ago | parent [-] | | I have 2 servers, Alice and Bob. Bob has a secret, and I want Bob to be able to share that secret with Alice. However, I want Alice to be able to prove to Bob that it is actually Alice, that it is running the correct AliceOS, and that AliceOS was loaded on bare-metal Alice without nefarious pre-boot or virtualization hooks. A TPM with measured boot (Secure Boot) does exactly this; remote attestation is how Alice proves to Bob that it is in a trusted configuration and wasn't tampered with. | | |
| ▲ | xinayder 43 minutes ago | parent | next [-] | | And exactly how many Linux distros support Secure Boot out of the box? Just a few. I can perhaps agree that the idea of SB can be good, but it was designed (and is used) in a bad way. Just look at how many distros do not support SB. | |
| ▲ | brabel 2 hours ago | parent | prev | next [-] | | As someone who wanted to improve users' security, that's exactly why I find this thread's fanatical opposition to attestation baffling. Nearly everyone uses a device that supports hardware attestation. It's the best available tool to protect users from malware. We do implement a fallback that lowers security but lets the few users whose devices can't attest properly continue, but that really lowers security, since we can't even know if the device's cryptography is itself compromised and hence can't really trust anything it sends. If you have a different solution, do share it! I would love to use something you guys don't find abhorrent! But until then I don't really see the reason for all this negativity. | |
| ▲ | MadnessASAP an hour ago | parent [-] | | Sadly, the problem isn't the TPM or remote attestation. It's Google et al. choosing to only talk to devices and software they like, without concern for what the user wants or trusts. Compounded by everyone else just going along with it. A TPM where the device owner can't take ownership of the root key is worse than no TPM at all. |
| |
| ▲ | userbinator 2 hours ago | parent | prev [-] | | That's the academic viewpoint, but in practice it's used for far more hostile purposes. (One argues that since you own both of them, you should simply set up the two servers yourself with a key of your own choosing, asymmetric or otherwise, and then restrict physical access to them.) |
|
| |
| ▲ | nullc 10 hours ago | parent | prev [-] | | "It’s a poor atom blaster that won’t point both ways." |
| |
| ▲ | zx8080 10 hours ago | parent | prev [-] | | The biggest problem is the banking system. "Don't want it? No bank for you." That's the problem. | |
| ▲ | Hackbraten 4 hours ago | parent [-] | | Let them know. Write a letter to the CEO. And vote with your wallet and switch banks if you can. There's always a bank willing to offer you a non-app 2FA scheme. | | |
| ▲ | gorgolo 2 hours ago | parent | next [-] | | Banks don’t do this because of profit. They do it because of decades of laws pushing in this direction. Anti-money laundering, know your customer, digitalised currency, abandoning cash, preventing tax evasion etc… it’s been getting more extensive over time. | | |
| ▲ | Hackbraten an hour ago | parent [-] | | None of the things you mentioned inherently require the user to own (and babysit) an expensive general-purpose computing device produced by tracking-obsessed adtech giants and with software obsolescence built into the product. |
| |
| ▲ | brabel 2 hours ago | parent | prev [-] | | Do you think banks are using attestation gratuitously? It helps prevent a lot of fraud. You are opposing something that saves people’s savings every day just because you think it takes “freedom” away from a few hobbyists. Do you even have a phone that does not support hardware attestation or is all this posturing about something hypothetical? | | |
| ▲ | xinayder 40 minutes ago | parent | next [-] | | Can you show me examples where locking down an OS has prevented fraud in banking? Honestly, if the only way to secure your banking system is by locking down users' devices, there is something really bad going on at your end, security-wise. Your system should be secure even without locking down user hardware. | | |
| ▲ | Hackbraten 27 minutes ago | parent [-] | | One of the threat models is that a fraudster tricks a non-technical user into installing malware, which then manipulates the user interface so that next time the user tries to send money to Bob, it actually goes to Mallory.
That's a legitimate concern, and one of the causes why PSD2 mandates that all 2FA devices must have a display that shows the user where they're about to send the money and how much. |
| |
| ▲ | Hackbraten an hour ago | parent | prev [-] | | > Do you think banks are using attestation gratuitously? What I'm claiming is that banks have the freedom of offering their customers 2FA other than smartphone apps. > Do you even have a phone that does not support hardware attestation or is all this posturing about something hypothetical? All the phones I own, including my daily driver, run some flavor of Debian. None of them support hardware attestation. I'm in Europe, bound by PSD2, and own a couple of cheap, certified chip-and-TAN devices so I can do banking. |
|
|
|
| |
| ▲ | altairprime 7 hours ago | parent | prev | next [-] | | How should a government act to prohibit misrepresentation of one’s characteristics online, from accessing services for which that government has formally defined regulations based on characteristic into law? If your answer is “they shouldn’t ever do that”, then you’re promoting an uncompromising position that governments are disinclined to adopt, being the primary user of identity issuance and verification on behalf of their citizens. If your answer is “they should do that differently”, then you have a discussion about (for example) ZKP or biosigs or etc., such as the thread you’re replying to. Which of these two paths are you here to discuss? I want to be sure I’ve correctly understood you to be arguing for the former in a thread about the latter. | |
| ▲ | lxgr 11 hours ago | parent | prev | next [-] | | You're not necessarily being surveilled just because you're forced to authenticate yourself. It often is the case in practice, but it's not inherent, and mixing the two up makes the discussion too imprecise for a technical forum. Hardware attestation often also has problems of centralization, but that's something else as well. By just labeling it an abstract bad thing without seeing nuance, I'm afraid you won't be convincing those in power to pass or block these laws, or your fellow voters about which efforts to support. | |
| ▲ | xphos 7 hours ago | parent | next [-] | | I think labeling this an abstract problem because all the existing implementations have concrete but different problems is a bit of a motte-and-bailey fallacy. The surveillance of the future will be powered by the things we produce today. If the accepted algorithms leave cookies, those cookies will be tracked and monetized. The bad part is the forced verification to do things on the internet. Making that start at the hardware is a lock-in that's not okay. Businesses will always own the services, and making standards that trade our practical liberty for the sake of security is a very compromised position, in my opinion. And it does start with age verification, followed by ID checks, etc. It's compromising precisely because no lines are drawn and no rights to privacy are codified in law. Without guardrails, the worse path will likely be taken for maximum profit. | |
| ▲ | zx8080 10 hours ago | parent | prev | next [-] | | > You're not necessarily being surveiled just because you're forced to authenticate yourself. Oh hell you do! Google profit comes from ADS! It's for their profit to surveil and track and deanonymize TO SELL ADS. | | |
| ▲ | whattheheckheck 3 hours ago | parent [-] | | Having thought about ads, what is the ideal feedback info channel loop from manufacturers to consumers? How best to distribute the information of who can manufacture what at what cost/price and what does it do and when is it appropriate for consumers to receive or pull info from where? And if it ends up being a monopoly of 1 centralized system how do you allow for a competitor to break through without ads? |
| |
| ▲ | bigyabai 10 hours ago | parent | prev | next [-] | | > It often is the case practically, but it's not inherent Oh my god. It's 2026, and we're still repeating the "I trust Apple/Google/Microsoft enough to resist the government" spiel. Hardware attestation is a surveillance mechanism. If China was enforcing the same rule, you would immediately identify it as a state-driven deanonymization effort. But when the US does it, you backpedal and suggest that it could be implemented safely in a hypothetical alternate reality. Do you want to live in a dystopia? | | |
| ▲ | lxgr 10 hours ago | parent [-] | | > Oh my god. It's 2026, and we're still repeating the "I trust Apple/Google/Microsoft enough to resist the government" spiel. Who is? > But when the US does it [...] I don't live in the US, and while US is often setting global trends, in this case I don't think that's actually that likely, unless it somehow goes significantly better (i.e., the benefits actually vastly exceed the collateral damage to anonymity and resiliency via heterogeneity) than expected. |
| |
| ▲ | xinayder 11 hours ago | parent | prev [-] | | Those in power who need convincing are the same ones pushing for mass surveillance online. |
| |
| ▲ | coppsilgold 11 hours ago | parent | prev [-] | | There is a problem where it's becoming increasingly harder to determine whether the internet packets coming to your service are at the behest of a human in the course of normal activities, or from an automated program. If all the internet served was static content, that wouldn't be much of a problem. But we live in a world where packets coming to your service result in significant state changes to your database (such as user-generated content). I suspect that we are currently in the valley of do-something-about-it on the graph, which is why you see all this angst from the big players. Would Google really care if automated programs were so good that they approximated real humans to such an extent that absolutely no one could tell? I suspect they would not only be happy with such a state of affairs, they would join in. | |
| ▲ | userbinator 11 hours ago | parent [-] | | That's not a problem at all. It's an artificially created distraction, created to manufacture consent, by those pushing for this shit. |
|
|
|
| ▲ | Hoodedcrow 13 hours ago | parent | prev | next [-] |
| Would like to read a writeup on this, I was certain it was going to be something like this from the app's announcement. Also I recall a discussion on Graphene's forums that DRM ID is not only retained there, but stays the same across profiles. |
| |
| ▲ | coppsilgold 13 hours ago | parent [-] | | I simplified the process in my description. The DRM ID Android has is not what I was referring to. I was referring to the static private key that is stored in the silicon. At any time, an application can initiate a license request process using the DRM APIs, which will elicit an unchangeable HWID from your device. The only protection is that it will be encrypted such that only an authorized license server's private key can decrypt it, so collusion may be required (intel agencies almost certainly sourced 'authorized' private keys for themselves). Google or Apple also have the option to authorize keys for themselves. In 'theory' all such keys should be stored in "trusted execution environments" on license servers and not divulge client identities, for whatever that's worth: <https://tee.fail>. | | |
| ▲ | comex 5 hours ago | parent [-] | | Citation? | | |
| ▲ | coppsilgold 5 hours ago | parent [-] | | > Content Decryption Module (CDM) in your browser or Mobile SDK generates the license challenge
<https://go.buydrm.com/thedrmblog/the-anatomy-of-a-multi-drm-...>
The "license challenge" (possibly a mistake; I think it's supposed to be a license request) is just a packet (which can be saved and later sent anywhere), and it contains the encrypted certificate, which doubles as your HWID. An adversary needs to control the private key of the license "server" the challenge is for (this is a privacy measure introduced to prevent the CDM from offering the HWID to anyone who wants it). Now if you want the HWID, you need to work for it (one time) by stealing a private key, bribing/blackmailing employees, or issuing secret edicts ("here is a new license server we need a certificate for"). Working for Hollywood is also an option, I suppose. Pirates sacrifice devices when they publish ripped content, due to the certificate being revoked after Hollywood downloads the torrent, and due to things like this:
> For large-scale per-viewer, implement a content identification strategy that allows you to trace back to specific clients, such as per-user session-based watermarking. With this approach, media is conditioned during transcoding and the origin serves a uniquely identifiable pattern of media segments to the end user.
<https://docs.aws.amazon.com/wellarchitected/latest/streaming...> |
|
|
|
|
| ▲ | willis936 12 hours ago | parent | prev [-] |
| Are these the kinds of issues Privacy Pass intends to fix? If so, what carrot and/or stick will get it adopted? |