indolering 6 hours ago

I did a large data analysis of DNS caching times across the web. Hyperscalers are the only ones who care and they fix that with insanely long DNS caching.

ekr____ 6 hours ago | parent [-]

I'm not trying to just nitpick you here, but the message I was responding to said "People stopped caring about ultra-low latency first connect times back in the 90s."

It seems to me that you're saying here that (1) the hyperscalers do care but (2) it's under control. I'm not necessarily arguing with (2), but as far as the hyperscalers go: they drive a lot of traffic on their own, and in many cases they care precisely so their users don't have to.

indolering 5 hours ago | parent [-]

Sorry, the point I was trying to make is that this isn't a problem operationally.

Hyperscalers go to crazy lengths because they can measure monetary losses from milliseconds of lost view time, and it's much easier when they already have distributed cloud infrastructure anyway. But it's not really solving a problem for their customers. At least when I worked in DNS land ... latency micro-benchmarking was something of a joke. Sure, you can shave off a few tens of milliseconds, but it's super expensive. If you want to reduce latency, just raise your TTLs and/or enable prefetching.
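To make the TTL point concrete, here's a back-of-envelope sketch (all numbers are illustrative assumptions, not measurements): once an answer is cached, only one query per TTL window pays the full resolution cost, so raising the TTL drives the average per-query latency toward the cache-hit cost.

```python
# Back-of-envelope model of how TTL affects average DNS lookup latency.
# uncached_ms and cached_ms are assumed, illustrative costs.

def avg_lookup_ms(ttl_s, queries_per_hour, uncached_ms=50.0, cached_ms=0.1):
    """Average per-query latency when a resolver caches answers for ttl_s.

    Over one hour, roughly 3600 / ttl_s queries miss the cache (one per
    TTL window); the rest are served from the cache almost instantly.
    """
    total = queries_per_hour
    misses = min(total, 3600 / ttl_s)
    hits = total - misses
    return (misses * uncached_ms + hits * cached_ms) / total

# A busy client making 600 queries/hour to the same name:
print(f"TTL=60s:   {avg_lookup_ms(60, 600):.2f} ms avg")    # ~5 ms
print(f"TTL=3600s: {avg_lookup_ms(3600, 600):.2f} ms avg")  # well under 1 ms
```

Under these assumptions, going from a 60-second to a one-hour TTL cuts the average from about 5 ms to a fraction of a millisecond, which is the "just raise your TTLs" argument in miniature.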

As a blocker for DNSSEC ... people made arguments about HTTPS overhead back in the day too. DoH also introduces latency, yet people aren't worried about that being a deal killer.

ekr____ 4 hours ago | parent [-]

> As a blocker for DNSSEC ... people made arguments about HTTPS overhead back in the day too.

They did, and then we spent an enormous amount of time to shave off a few round trip times in TLS 1.3 and QUIC. So I'm not sure this is as strong an argument as you seem to think it is.

> DoH also introduces latency, yet people aren't worried about that being a deal killer.

Actually, it really depends. It can actually be faster. Here are Mozilla's numbers from when we first rolled out DoH. https://blog.mozilla.org/futurereleases/2019/04/02/dns-over-...

And here are some measurements from Hounsel et al. https://arxiv.org/abs/1907.08089

indolering 4 hours ago | parent [-]

> They did, and then we spent an enormous amount of time to shave off a few round trip times in TLS 1.3 and QUIC.

But if it's worth doing for HTTP, why not for DNS?

> Actually, it really depends. It can actually be faster. Here are Mozilla's numbers from when we first rolled out DoH.

Oh fun!

ekr____ 4 hours ago | parent [-]

> But if it's worth doing for HTTP, why not for DNS?

I'm sorry I don't understand your question.

indolering 3 hours ago | parent [-]

The engineering effort! ECC solves the theoretical concerns around latency anyway yet we have people arguing that it shouldn't be done. But if it was worth making HTTPS faster to secure HTTP, why not DNS?

ekr____ 3 hours ago | parent | next [-]

Ah, I see what you're asking.

You're not going to find this answer satisfying, I suspect, but there are two main reasons browsers and big sites (that's what we're talking about) didn't bother to try to make DNSSEC faster:

1. They didn't think that DNSSEC did much in terms of security. I recognize you don't agree with this, but I'm just telling you what the thinking was.

2. Because there is substantial deployment of middleboxes which break DNSSEC, DNSSEC hard-fail by default is infeasible.

As a consequence, the easiest thing to do was just ignore DNSSEC.

You'll notice that they did think that encrypting DNS requests was important, as was protecting them from the local network, and so they put effort into DoH, which also had the benefit of being something you could do quickly and unilaterally.

akerl_ 3 hours ago | parent | prev [-]

HTTPS solved a bunch of real world threat models that were causing massive security issues. So we collectively put a bunch of engineering time into making it performant so that we could deploy it everywhere with minimal impact on UX and performance.

indolering 2 hours ago | parent [-]

DNSSEC also solves a bunch of real world threat models that do cause massive security issues. I think we should put that effort into DNS as well.

tptacek 2 hours ago | parent | next [-]

Somehow they cause these massive security issues without impacting the 95%+ of sites that haven't used the protocol since it became viable to adopt a decade and a half ago.

It's just a very difficult statistic to get around! Whenever you make a claim like this, you have to address the fact that basically ~every high-security organization on the Internet has chosen not to adopt the protocol, and there are basically zero stories about how this has bitten any of them.

akerl_ 2 hours ago | parent | prev [-]

Does it?

I run a bunch of websites personally. I have ACME-issued TLS certificates from LetsEncrypt. I monitor the Certificate Transparency logs, and have CAA records set.
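For readers who haven't seen CAA records: they look roughly like this in a zone file (the domain and contact address here are placeholders, and `letsencrypt.org` is just the issuer tag matching the commenter's setup). The `issue` record tells CAs other than the named one to refuse issuance, and CT-log monitoring then catches any mis-issuance after the fact.

```
example.com.  3600  IN  CAA  0 issue "letsencrypt.org"
example.com.  3600  IN  CAA  0 iodef "mailto:security@example.com"
```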

What's the threat model that should worry me, where DNSSEC is the right improvement?