Veserv 4 hours ago

Exactly, HTTP/1.1 is a poorly designed protocol and there are good reasons why we have newer versions of HTTP which avoid multiple unnecessary encryption handshakes.

Exactly, using a blanket default initial congestion window of 16 KB is stupid. Even ignoring that it was chosen when average bandwidths were many times lower (so it should be increased anyway to something on the order of the average BDP, or you should use a better congestion control algorithm), it is especially stupid when you are opening a connection with a known minimum amount of data to send before anything useful can happen.
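To make "on the order of the average BDP" concrete, here is a quick sketch. The bandwidth and RTT figures are illustrative assumptions, not measurements of any real path:

```python
# Bandwidth-delay product: how much data can be "in flight" on a path.
# Illustrative numbers: a 100 Mbps link with 50 ms RTT.
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> float:
    """BDP = bandwidth * RTT, converted to bytes."""
    return bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1e3)

print(bdp_bytes(100, 50))  # 625000.0 bytes, i.e. ~625 KB
# A 16 KB initial window fills under 3% of that pipe on the first RTT.
```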

These things should be fixed as well instead of papering them over. Your system should work well regardless of the size of the certificate chain except for the fundamental overhead of having a larger chain.

bastawhiz 3 hours ago | parent

I mean, unless you stop supporting H1, you're stuck with it. "Fixing" it means killing it. Unless you break every site/API that uses it, you can't do that.

Increasing the initial congestion window is probably smart, but increasing it to a size large enough to hold a 160 KB certificate is almost certainly a terrible idea. Lots of people with "broadband" probably never get close to a 160 KB congestion window.

Flaky wifi or a bad mobile signal will probably never get above a 32 KB congestion window, and that's today, with modern hardware. That's five round trips to deliver a 160 KB chain, assuming you start at 32 KB and the window never increases.
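The round-trip count here is simple ceiling division, assuming the pessimistic case of a window that never grows (real TCP windows do grow, so this is the worst case):

```python
import math

# Round trips to push `total_kb` of data when each RTT can carry
# at most `window_kb` and the window never grows (worst case).
def round_trips(total_kb: float, window_kb: float) -> int:
    return math.ceil(total_kb / window_kb)

print(round_trips(160, 32))  # 5 round trips for the certificate chain alone
```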

You think airplane wifi is bad? Imagine how bad it'll be when the congestion window starts an order of magnitude bigger than it would normally ever reach. The "fix" means... well, I don't know, actually, because if in-flight wifi could be good, you'd think at least one carrier would have it by now. I doubt you could overcome the bureaucratic and technical challenges.

This isn't a problem that can be "fixed" in a lot of cases. If you optimize for the happy path, you're not just hurting people who literally don't have another option, you're hurting yourself whenever you're on a bad connection.

Veserv 3 hours ago | parent

You are not breaking H1; it just runs poorly in a different environment than the one it was created in. That is frankly already true, which is why we have had two entire major versions since.

A 160 KB congestion window with 50 ms RTT means you are limited to a maximum bandwidth of 3,200 KB/s (~25 Mbps). At 200 ms RTT you are limited to ~6.5 Mbps. At 32 KB you are getting ~5 Mbps and ~1 Mbps, respectively.
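The arithmetic here is just throughput ≤ cwnd / RTT: you can push at most one window of data per round trip. A quick check, using decimal KB so the figures round slightly differently than powers of two:

```python
# Throughput ceiling imposed by a fixed congestion window:
# at most one window of data per round trip.
def max_mbps(cwnd_kb: float, rtt_ms: float) -> float:
    bytes_per_sec = cwnd_kb * 1e3 / (rtt_ms / 1e3)
    return bytes_per_sec * 8 / 1e6

print(max_mbps(160, 50))   # 25.6 Mbps  (~25)
print(max_mbps(160, 200))  # 6.4 Mbps   (~6.5)
print(max_mbps(32, 50))    # 5.12 Mbps  (~5)
print(max_mbps(32, 200))   # 1.28 Mbps  (~1)
```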

If you are literally limited to 1 Mbps, then you should not use an initial 160 KB congestion window, as that is too much for your connection anyway. You can solve this with proper adaptive channel parameter detection in your network stack. In the presence of arbitrarily poor, degraded, or lossy network conditions, you should already be doing this to achieve good throughput, including on freshly opened connections.

A proper design should only really have the problem of "we are literally sending more data, which fundamentally takes an extra N units of time on our K-rate connection". That problem is still worth solving by reducing the size of the certificate chain, but if you have problems beyond it, you should solve those as well. More pointedly, having problems beyond that points directly at serious structural design deficiencies that are ossified and brittle.