Veserv 3 hours ago
You are not breaking H1, it just runs poorly in a different environment than the one it was created in. This is frankly already true, which is why we have literally had two entire major versions since. A 160 KB congestion window with 50 ms RTT means you are limited to a maximum bandwidth of 3,200 KB/s (~25 Mbps). At 200 ms RTT you are limited to ~6.5 Mbps. At 32 KB you are getting ~5 Mbps and ~1 Mbps, respectively.

If you are literally being limited to 1 Mbps, then you should not use an initial 160 KB congestion window, as that is too much for your connection anyway. You can solve this with proper adaptive channel parameter detection in your network stack. In the presence of arbitrarily poor, degraded, or lossy network conditions, you should already be doing this to achieve good throughput and initial connection throughput.

A proper design should only really have the problem of "we are literally sending more data, which fundamentally takes an extra N units of time on our K rate connection". This is a problem that is still worth solving by reducing the size of the certificate chain, but if you have problems other than that, then you should solve them as well. More pointedly, having problems other than that directly points at serious structural design deficiencies that are ossified and brittle.
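The window/RTT arithmetic above can be sketched as a small helper (the function name is mine; the bound comes from the fact that at most one congestion window of data can be in flight per round trip):

```python
def cwnd_limited_throughput_mbps(cwnd_bytes: int, rtt_ms: float) -> float:
    """Upper bound on throughput when the sender is congestion-window
    limited: throughput <= cwnd / RTT, converted to megabits per second."""
    bytes_per_sec = cwnd_bytes / (rtt_ms / 1000.0)
    return bytes_per_sec * 8 / 1e6  # bytes/s -> Mbps

# Reproduce the figures quoted above (using 1 KB = 1000 bytes):
for cwnd_kb in (160, 32):
    for rtt_ms in (50, 200):
        mbps = cwnd_limited_throughput_mbps(cwnd_kb * 1000, rtt_ms)
        print(f"cwnd={cwnd_kb:>3} KB, RTT={rtt_ms:>3} ms -> {mbps:5.2f} Mbps")
# cwnd=160 KB, RTT= 50 ms -> 25.60 Mbps
# cwnd=160 KB, RTT=200 ms ->  6.40 Mbps
# cwnd= 32 KB, RTT= 50 ms ->  5.12 Mbps
# cwnd= 32 KB, RTT=200 ms ->  1.28 Mbps
```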