| ▲ | lol768 2 days ago |
| The ship has very much sailed now with ballot SC63, and this is the result, but I still don't think CRLs are remotely a perfect solution (nor do I think OCSP was unfixable). You run into so many problems with the size of them, the updates not propagating immediately, etc. It's just an ugly solution to the problem, on top of which you then have to introduce further hacks (Bloom filters) to make the whole mess work. I'm glad that Mozilla have done lots of work in this area with CRLite, but it does all feel like a bodge. The advantages of OCSP were that you got a real-time understanding of the status of a certificate and you had no need to download large CRLs, which become stale very quickly. If you set security.ocsp.require in the browser appropriately then you didn't have any risk of the browser failing open, either. I did that in the browser I was daily-driving for years and can count on one hand the number of times I ran into OCSP responder outages. The privacy concerns could have been solved through adoption of Must-Staple, and you could then operate the OCSP responders purely for web servers and folks doing research. And let's not pretend users aren't already sending all the hostnames they are visiting to their selected DNS server. Why is that somehow okay, but OCSP not? |
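A rough sketch of the filter approach being criticized here, assuming Python and toy parameters: CRLite-style revocation checks boil down to set-membership queries against a compact filter shipped to the browser. This single-level Bloom filter is not Mozilla's actual CRLite format (which uses a cascade of filters keyed by issuer and serial so that false positives can be eliminated); it only illustrates why the "hack" works at all.

    import hashlib

    class BloomFilter:
        """Toy Bloom filter for revocation lookups (illustration only)."""

        def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 4):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: bytes):
            # Derive k bit positions from SHA-256 of (index || item).
            for i in range(self.num_hashes):
                digest = hashlib.sha256(bytes([i]) + item).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item: bytes) -> None:
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item: bytes) -> bool:
            # False => definitely not revoked; True => possibly revoked,
            # so a real client falls back to another layer of the cascade.
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))

    # Hypothetical usage, keyed on (issuer, serial) as a browser might do.
    revoked = BloomFilter()
    revoked.add(b"Example CA:04:7f:31")
    print(revoked.might_contain(b"Example CA:04:7f:31"))   # True
    print(revoked.might_contain(b"Example CA:99:99:99"))   # almost certainly False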
|
| ▲ | ekr____ 2 days ago | parent | next [-] |
| The problem with requiring OCSP stapling is that it's not practically enforceable without breakage. The underlying dynamic of any change to the Web ecosystem is that it has to be incrementally deployable, in the sense that when element A changes it doesn't experience breakage with the existing ecosystem. At present, approximately no Web servers do OCSP stapling, so any browser which requires it will just not work. In the past, when browsers have wanted to make changes like this, they have had to give years of warning, and then they can only actually make the change once nearly the entire ecosystem has switched, so you have minimal breakage. This is a huge effort and only worth doing when you have a real problem. As a reference point, it took something like 7 years to disable SHA-1 in browsers [0], and that was an easier problem because (1) CAs were already transitioning, (2) it didn't require any change to the servers, unlike OCSP stapling which requires them to regularly fetch OCSP responses [1], and (3) there was a clear security reason to make the change. By contrast, with Firefox's introduction of CRLite, all the major browsers now have some central revocation system, which works today as opposed to years from now and doesn't require any change to the servers. [0] https://security.googleblog.com/2014/09/gradually-sunsetting...
[1] As an aside it's not clear that OCSP stapling is better than short-lived certs. |
| |
| ▲ | lol768 2 days ago | parent | next [-] | | I think you are correct. There were similar issues with Firefox rolling out SameSite=Lax by default, and I think those plans are now indefinitely on hold as a result of the breakage it caused. It's a hard problem to solve. > As an aside it's not clear that OCSP stapling is better than short-lived certs. I agree this should be the end goal, really. | | |
| ▲ | catlifeonmars a day ago | parent [-] | | Oh wow. I thought SameSite=Lax by default was a done deal. It shows how much I’ve been following in the past few years. |
| |
| ▲ | dlenski a day ago | parent | prev [-] | | > The underlying dynamics of any change to the Web ecosystem is that it has to be incrementally deployable, in the sense that when element A changes it doesn't experience breakage with the existing ecosystem. Absolutely, this is important. But I don't understand why this should have any effect on OCSP-stapling vs. CRL. As you note, "approximately no Web servers do OCSP stapling, so any browser which requires it will just not work." But browsers also cannot rely on CRLs being 100% available and up-to-date. Enforcing OCSP stapling and enforcing a check against an up-to-date CRL would both require this kind of incremental or iterative deployment. > As an aside it's not clear that OCSP stapling is better than short-lived certs. This is equally applicable to CRL, though. The current plan for phased reduction of TLS cert lifespan is to stabilize at 47 days in 2029. If reducing cert lifetime achieves the goal of reducing the value of compromised certs, then any mechanism for revoking/invalidating certificates will be reduced in value. |
|
|
| ▲ | woodruffw 2 days ago | parent | prev | next [-] |
| > Why is that somehow okay, but OCSP not? I think the argument isn’t that it’s okay, but that one bad thing doesn’t mean we should do two bad things. Just because my DNS provider can see my domain requests doesn’t mean I want arbitrary CAs on the Internet to also see them. |
| |
▲ | dogma1138 2 days ago | parent [-] | | I never understood why they didn’t try to push OCSP into DNS. You have to trust the DNS server more than you trust the server you are reaching out to, since the DNS server can direct you anywhere as well as see everything you are trying to access anyhow. | | |
▲ | cortesoft 2 days ago | parent | next [-] | | TLS is to protect you from malicious actors somewhere along your connection path. DNS can't help you there. Just imagine you succeeded in inventing a perfectly secure DNS server. Great, we know this IP address we just got back is the correct one for the server. Ok, then I go to make a connection to that IP address, but someone on hop 3 of my connection is malicious, and instead of connecting me to the IP, just sends back a response pretending to be from that IP. How would I discover this? TLS would protect me from this; perfectly secure DNS won't. | | |
▲ | parliament32 a day ago | parent [-] | | If you had a perfectly secure DNS service, you could just stick the certificate fingerprint in DNS and be done. No need for trust stores, CAs, trust chains, OCSP/CRLs... | |
▲ | waste_monk a day ago | parent | next [-] | | Indeed, this already exists with record types such as TLSA, SMIMEA, and CERT. However, I don't believe I've ever seen it used "in the wild". | |
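For the curious, a DANE-EE TLSA record is just a digest of the server's key published in DNS. A minimal sketch of computing one with Python's cryptography package (the certificate path and domain name below are placeholders; "3 1 1" means DANE-EE usage, SubjectPublicKeyInfo selector, SHA-256 matching type):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization

    # Load the server's leaf certificate; "server.pem" is a placeholder path.
    with open("server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # Hash the DER-encoded SubjectPublicKeyInfo (selector 1, matching type 1).
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    digest = hashes.Hash(hashes.SHA256())
    digest.update(spki)

    # Usage 3 = DANE-EE: clients match this key directly, no CA chain involved.
    print("_443._tcp.example.com. IN TLSA 3 1 1", digest.finalize().hex())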
▲ | cortesoft a day ago | parent | prev [-] | | How would you revoke a certificate? If you are running a malicious DNS server, couldn't you just refuse the update and keep serving the prior results? | |
| ▲ | parliament32 a day ago | parent [-] | | If the DNS service is "perfectly secure", we're assuming MITM attacks are impossible. You wouldn't need to revoke anything, you just update the fingerprint in the record. | | |
▲ | cortesoft a day ago | parent [-] | | Why would DNS being perfectly secure make MITM attacks impossible? It might be impossible to hijack DNS, but after DNS resolution happens, the actual connection to that IP address still has to happen. If you are saying every packet sent is secure, then it would have nothing to do with DNS? | |
| ▲ | cyphar a day ago | parent [-] | | You could store the certificate hashes in DNS (i.e., use DANE instead of the CA PKI) and so a MITM on the actual connection wouldn't succeed. | | |
▲ | cortesoft a day ago | parent [-] | | Right, but what if the certificate is compromised? How would you revoke it? | |
▲ | cyphar 20 hours ago | parent [-] | | If the DNS entries for the certificates have very short TTLs (e.g., 2 minutes) then you wouldn't need explicit revocation infrastructure. It would probably take more than 2 minutes for CRL or OCSP changes to propagate anyway. (I'm not necessarily in favour of this, I just don't see the revocation part as being the main issue.) |
|
|
|
|
|
|
| |
▲ | cyphar 2 days ago | parent | prev | next [-] | | Because one of the main things TLS is intended to defend against is malicious / MITM'd DNS servers? If DNS were trustworthy then the entirety of the TLS PKI would be redundant... | |
▲ | crote a day ago | parent | next [-] | | Does it, though? In practice, TLS certificates are given out to domain owners, and domain ownership is usually proven by being able to set a DNS record. This means compromise of the authoritative DNS server implies compromise of TLS. Malicious relaying servers and MitM on the client are already solved by DNSSEC, so it's not adding anything there either. If we got rid of CAs and stored our TLS public keys in DNS instead, we would lose relatively little security. The main drawback I can think of is the loss of certificate issuance logs. | |
| ▲ | ekr____ a day ago | parent [-] | | > In practice, TLS certificates are given out to domain owners, and domain ownership is usually proven by being able to set a DNS record. This means compromise of the authorative DNS server implies compromise of TLS. Yes, except for CT, which can help detect this kind of attack. > Malicious relaying servers and MitM on the client is already solved by DNSSEC, so it's not adding anything there either. I'm not sure quite what you have in mind here, but there is more to the issue than correct DNS resolution. In many cases, the attacker controls the network between you and the server, and can intercept your connection regardless of whether DNS resolved correctly. > If we got rid of CAs and stored our TLS public keys in DNS instead, we would lose relatively little security. The main drawback I can think of is the loss of certificate issuance logs. This may be true in principle but has a very low chance of happening in practice, because there is no current plausible transition path, so it's really just a theoretical debate. | | |
▲ | cyphar a day ago | parent [-] | | > This may be true in principle but has a very low chance of happening in practice, because there is no current plausible transition path, so it's really just a theoretical debate. Well, DANE exists and provides an obvious transition path, as brittle an approach as it is. Ideally you would be able to create your own intermediates (with name constraints) and pin the intermediate rather than the leaf certificate, but PKI isn't set up for that. From my understanding, the biggest issue with DNSSEC is that it's just a return to the single signing authority model that TLS used in the 90s. Isn't it also just Verisign again? (At least for .com.) | |
▲ | tptacek 2 hours ago | parent [-] | | That is a problem, but beyond the philosophical problem (which I care a lot about) and the cryptographic problems (which I care a lot about), most of the reason DANE isn't taken seriously (it is, in fact, a dead letter with the browsers, meaning it's a dead letter everywhere) is the practical issues of deploying it. Stipulate, very unrealistically, that a sizable portion of the most popular zones were signed (the opposite is true). Then: you still have the problem where a substantial cohort of Internet users can't resolve DANE records. They're on Internet paths that include middleboxes that freak out when they see anything but simple UDP DNS records. You can't define that problem away. So now you need to design a fallback for those users. Whatever that fallback is, you have to assume attackers will target it; that's the whole point of the exercise. What you end up with is a system that decays to the natural security level of the WebPKI. From a threat model perspective, what you've really done is just add another CA to the system. Not better! DANE advocates tried for years to work around this problem by factoring out the DNS from DANE, and stapling DANE records to TLS handshakes. Then someone asked, "well, what happens when attackers just strip that out of the handshake?" These records are used to authenticate the handshake, so you can't just set "the handshake will be secure" as an axiom. Nobody had a good answer! The DANE advocates were left saying we'd be doing something like HPKP, where browsers would remember DANE-stapled hosts after first contact. The browser vendors said "lol no". That's where things stand. The stapling thing was so bad that Geoff Huston --- a DNS/DNSSEC éminence grise --- wrote a long blog post asking (and more or less conceding) whether it was time to stick a fork in the whole thing. |
|
|
| |
| ▲ | ekr____ a day ago | parent | prev | next [-] | | This isn't true. Even if the DNS server is secure, the network between you and the server cannot be trusted. | | |
| ▲ | cyphar a day ago | parent [-] | | If DNS was presumed secure (i.e., secure against MITM at all points in the chain) you could just stuff the public key into a DNS record (a-la DANE) and remove the need for PKI. I'm saying there would be no need for CAs -- you could just trust self-signed certs. Some might argue DNSSEC solves this already, I'm not particularly convinced it's any better than the original CA cabal. |
| |
▲ | jve 2 days ago | parent | prev | next [-] | | > then the entirety of TLS PKI would be entirely redundant... Don't think I agree with this. TLS is important against MITM scenarios - integrity, privacy. You don't need DNS to be abused for this, just a man in the middle - whether that is some open wifi, your ISP, or someone tapped into your network any other way. | |
| ▲ | cyphar a day ago | parent [-] | | If DNS was presumed secure (i.e., secure against MITM at all points in the chain) you could just stuff the public key into a DNS record (a-la DANE) and remove the need for PKI. There would be no need for the authentication provided by CAs, but you would still want to use TLS. Some might argue DNSSEC solves this already, I'm not particularly convinced it's any better than the original CA cabal. |
| |
| ▲ | catlifeonmars a day ago | parent | prev [-] | | > If DNS was trustworthy then the entirety of TLS PKI would be entirely redundant I’m not sure I understand the logic here. To me TLS PKI and DNS are somewhat orthogonal. |
| |
| ▲ | woodruffw 2 days ago | parent | prev | next [-] | | How would that work in the current reality of the DNS? The current reality is that it’s unauthenticated and indeterminately forwarded/cached, neither of which screams success for timely, authentic OCSP responses. | | | |
| ▲ | blackcatsec a day ago | parent | prev [-] | | Sounds like something DANE could be used for. |
|
|
|
| ▲ | dadrian 2 days ago | parent | prev | next [-] |
| OCSP stapling, when done correctly with fallback issuance, is just a worse solution than short-lived certificates. OCSP lifetimes are 10 days. I wrote about this some here [1]. [1]: https://dadrian.io/blog/posts/revocation-aint-no-thang/ |
|
| ▲ | dlenski a day ago | parent | prev | next [-] |
| > I still don't think CRLs are remotely a perfect solution (nor do I think OCSP was unfixable) > The privacy concerns could have been solved through adoption of Must-Staple Agreed. I haven't followed every bit of the play-by-play here, but OCSP (multi-)stapling appeared to me to be a good solution to both the end-user privacy concerns and to the performance concerns. |
|
| ▲ | PunchyHamster 2 days ago | parent | prev | next [-] |
| It's funny that putting some random records in DNS is enough to prove "ownership" to get a cert issued, but we can't use the same method for publishing revocations. |
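For reference, the "random record" in question is the ACME DNS-01 challenge: a TXT record at _acme-challenge.<domain> whose value is the base64url-encoded SHA-256 of the challenge token joined to the account key's JWK thumbprint (RFC 8555, section 8.4). A minimal sketch in Python, with made-up token and thumbprint values:

    import base64
    import hashlib

    def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
        # Key authorization = "<token>.<base64url JWK thumbprint of the account key>";
        # the TXT value is its SHA-256 digest, base64url-encoded without padding.
        key_authorization = f"{token}.{account_key_thumbprint}".encode()
        digest = hashlib.sha256(key_authorization).digest()
        return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

    # Both values below are hypothetical; the CA issues the token, and the
    # thumbprint comes from the ACME account key the client controls.
    txt = dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ",
                          "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI")
    print(f'_acme-challenge.example.com. 300 IN TXT "{txt}"')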
| |
▲ | ocdtrekkie 2 days ago | parent [-] | | The entire existence of CAs is a pointless and mystical venture to ensure centralized control of the Internet that, since now entirely domain-validated, provides absolutely no security benefits over DNS. If your domain registrar/name server provider is compromised, CAs are already a lost cause. | |
| ▲ | ekr____ 2 days ago | parent | next [-] | | This isn't correct, because your domain name server may be insecure even while the one used by the CA is secure. Moreover, CT helps detect misissuance but does not detect incorrect responses by your resolver. | | |
| ▲ | ocdtrekkie 2 days ago | parent [-] | | If someone can log into your domain registrar account or your web host, they can issue themselves a complete valid certificate. It won't matter if the CA resolver is secure, because the attacker can successfully validate domain control. | | |
| ▲ | ekr____ a day ago | parent [-] | | Yes, that's correct. The purpose of the WebPKI and TLS is not to protect against this form of attack but rather to protect against compromise of the network between the client and the server. |
|
| |
| ▲ | tptacek 2 days ago | parent | prev [-] | | The DNS is more centralized than the WebPKI. | | |
| ▲ | teddyh 2 days ago | parent | next [-] | | DNS isn’t centralized; it’s federated. I mean, just because there’s an ISO and a UN does not mean there is a single world government. (Repost: <https://news.ycombinator.com/item?id=38695674>) | | |
| ▲ | tptacek a day ago | parent [-] | | The distinction you're trying to draw here isn't relevant to the argument on the thread. "Centralization" is the other commenter's metric of concern, not mine. |
| |
▲ | ocdtrekkie 2 days ago | parent | prev | next [-] | | Three browser companies on the west coast of the US effectively control all decision-making for WebPKI. The entire membership of the CA/B is what, a few dozen? Mostly companies which have no reason to exist except serving math equations for rent. How many companies now run TLDs? Yeah, .com is centralized, but between ccTLDs, new TLDs, etc., tons. And domain registrars and web hosts which provide DNS services? Thousands. And importantly, hosting companies and DNS providers are trivially easy to change between. The idea that Apple or Google can unilaterally decide what the baseline requirements should be needs to be understood as an existential threat to the Internet. And again, every single requirement CAs implement is irrelevant if someone can log into your web host. The entire thing is an emperor-has-no-clothes situation. | |
▲ | tptacek 2 days ago | parent [-] | | Incoherent. Browser vendors exert control by dint of controlling the browsers themselves, and are in the picture regardless of the trust system used for TLS. The question is, which is more centralized: the current WebPKI, which you say is also completely dependent on the DNS but involves more companies, or the DNS itself, which axiomatically involves fewer companies? I always love when people bring the ccTLDs into these discussions, as if Google could leave .COM when .COM's utterly unaccountable ownership manipulates the DNS to intercept Google Mail. | |
| ▲ | teddyh 2 days ago | parent [-] | | > when .COM's utterly unaccountable ownership manipulates the DNS to intercept Google Mail. Why is this more likely to happen than a rogue CA issuing a false certificate? Also, Google has chosen to trust .com instead of using one of their eleven TLDs that they own for their own exclusive use, or any of the additional 22 TLDs that they also operate. | | |
| ▲ | akerl_ 2 days ago | parent [-] | | When a rogue CA issues a bad cert, they get delisted from all major browsers and are effectively destroyed. That isn’t possible with .com | | |
▲ | teddyh a day ago | parent [-] | | The DNS is federated and hierarchical. A domain name (including top-level domains) is controlled by a single entity. If you do not trust that entity, you cannot trust that domain or top-level domain, or anything beneath it in the tree. But given that you trust the root zone, you can still (potentially) trust other subtrees in the DNS, like other top-level domains. This is not the case with a CA, however; you are forced to trust all of them, and hope that when fraudulent certificates are issued (as has happened several times, IIUC), they will not affect you. | |
| ▲ | akerl_ a day ago | parent [-] | | In fact you don't have to trust any of them, since browser root stores enforce certificate transparency. But also the issues of segmentation are pretty much a total shift of the goalposts from what we were discussing, which is what actually happens when malicious activity occurs. In DNS, your only option is to stop trusting that slice of the tree and for every site operator to lift and shift to another TLD, inclusive of teaching all their users to use the new site. In WebPKI, the CA gets delisted for new certificate issuance and site operators get new certificates before the current ones expire. One of those is insane, and the other has successfully happened several times in response to bad/rogue CAs. |
|
|
|
|
| |
| ▲ | otabdeveloper4 2 days ago | parent | prev [-] | | No. You can host your own DNS. It's easy and practically free. | | |
| ▲ | peanut-walrus 2 days ago | parent [-] | | Your TLD registry operator still technically remains fully in control of your records. I am actually surprised more of them have not abused their power so far. | | |
| ▲ | crote a day ago | parent [-] | | Most TLD operators are non-profit foundations set up by nerds in the early days of the internet, well before the lawyers, politicians, and MBAs could get their hands on it. If you want to see what happens otherwise, just look at the gTLD landscape. Still, genuine power abuse is relatively rare, because to a large extent they are selling trust. If you start randomly taking down domains, nobody will ever risk registering a domain with you again. | | |
| ▲ | tptacek a day ago | parent [-] | | The most important TLDs are decidedly not non-profit foundations run by the nerds who set them up in the 1980s, and governments routinely manipulate the DNS for policy reasons. | | |
| ▲ | otabdeveloper4 a day ago | parent [-] | | You don't actually need a domain with an ""important"" TLD. True story. | | |
| ▲ | akerl_ a day ago | parent [-] | | What TLDs are today operated by non-profits? Looking at the list, I see a mix of commercial entities running them for profit and governments. | | |
| ▲ | otabdeveloper4 15 hours ago | parent [-] | | You don't actually need a non-profit TLD either. Having a healthy competitive market for DNS services is good enough. | | |
| ▲ | akerl_ 11 hours ago | parent [-] | | Really? If .com or .io or some other popular TLD starts acting maliciously, what’s the route to handling that problem? |
|
|
|
|
|
|
|
|
|
|
|
| ▲ | gerdesj 2 days ago | parent | prev [-] |
| "And let's not pretend users aren't already sending all the hostnames they are visiting to their selected DNS server. Why is that somehow okay, but OCSP not?" Running your own DNS server is rather easier than messing with OCSP. You do at least have a choice, even if it is bloody complicated. SSL certs (and I refuse to call them TLS) will soon have a required lifetime of forty something days. OCSP and the rest becomes moot. |
| |
▲ | dogma1138 2 days ago | parent [-] | | You are still reaching out to authoritative servers for that domain, so someone other than the destination knows what you are looking for. The 47-day lifetime isn’t going to come until 2029, and it might get pushed. Also, 47 days is still too long if certificates are compromised. | |
| ▲ | the8472 2 days ago | parent | next [-] | | The authoritative servers for a domain are likely to be operated by the same entity as the domain itself. | |
▲ | cyberax 2 days ago | parent | prev [-] | | You can request 6-day certificates from Let's Encrypt. There's a clear path towards 24-hour certificates. This will be pretty much equivalent to the current status quo with OCSP stapling. | |
| ▲ | akerl_ a day ago | parent [-] | | Is that live yet? (Not asking to be critical; I was keeping an eye out because I wanted to migrate but last I saw, 6 day certs were still in testing-only). | | |
▲ | cyberax a day ago | parent [-] | | It's in beta now; they are planning to release it very, very soon. |
|
|
|
|