raggi 2 hours ago

> You can already configure your initial congestion window, and if you are connecting to a system expecting the use of PQ encryption, you should set your initial congestion window to be large enough for the certificate; doing otherwise is the height of incompetence and should be fixed.

The aggressive tone is no defense against practical problems such as the poor scalability of such a solution.

> You could also use better protocols like QUIC, which has an independently flow controlled crypto stream, and you can avoid amplification attacks by pre-sending adequate amounts of data to stop amplification prevention from activating.

Not before key exchange it doesn't. There's no magic bullet here.

A refresher on the state of TFO and QUIC PMTU might be worthwhile here before jumping this far ahead.

Veserv an hour ago

You have asserted without evidence that the increased certificate chain size is the primary scaling bottleneck. I assert that the bottleneck is most likely accidental complexity elsewhere, on the grounds that the claimed problems look to be far in excess of the essential complexity.

> Not before key exchange it doesn't. There's no magic bullet here.

I was incorrect. Rereading the QUIC standard, I see that it does not flow control the CRYPTO packet number space/stream. I thought it did because it is so easy to do that I did it as an afterthought. Truly another example of fundamental design errors introducing accidental complexity that should be fixed instead of papered over.
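The kind of flow control being described can be sketched in a few lines. This is a toy illustration of an offset-based receive window in the style of QUIC's per-stream MAX_STREAM_DATA mechanism, hypothetically applied to a crypto stream; the class name, window sizes, and API are invented for illustration, not taken from any real implementation.

```python
# Toy sketch of offset-based stream flow control (illustrative only).
class FlowControlledStream:
    def __init__(self, max_data: int):
        self.max_data = max_data   # highest offset the peer may send up to
        self.received = 0          # highest offset received so far

    def can_accept(self, offset: int, length: int) -> bool:
        # Data beyond the advertised window is a flow-control violation.
        return offset + length <= self.max_data

    def on_data(self, offset: int, length: int) -> None:
        if not self.can_accept(offset, length):
            raise ValueError("FLOW_CONTROL_ERROR")
        self.received = max(self.received, offset + length)

    def grant_more(self, increment: int) -> None:
        # Receiver advances the window (analogous to a MAX_STREAM_DATA frame).
        self.max_data += increment

stream = FlowControlledStream(max_data=4096)
print(stream.can_accept(0, 4096))   # True: within the initial window
print(stream.can_accept(0, 5000))   # False: exceeds the window
stream.grant_more(4096)
print(stream.can_accept(0, 5000))   # True: after a window update
```

The receiver controls how much handshake data it is willing to buffer simply by deciding when to call `grant_more`.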

ekr____ 38 minutes ago

Can you elaborate a bit more on what you think the unnecessary complexity is here?

A basic source of concern here is whether it's safe for the server to use an initial congestion window large enough to handle the entire PQ certificate chain without an unacceptable risk of congestion collapse or other negative consequences. This is a fairly complicated question of network dynamics and of the interaction between a bunch of different machines potentially sharing the same network resources, and it is largely independent of the network protocol in use (QUIC versus TCP). It's possible that IW20 (or whatever) is fine, but it may well not be.
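A back-of-envelope calculation shows why the initial window matters here. The chain sizes and per-packet payload below are illustrative assumptions (a roughly 4 KB classical chain, a roughly 14 KB chain with PQ signatures, and a 1200-byte usable payload per packet), not measurements.

```python
import math

PAYLOAD_PER_PACKET = 1200   # assumed usable payload per packet (QUIC-ish)
CLASSICAL_CHAIN = 4_000     # assumed classical certificate chain, bytes
PQ_CHAIN = 14_000           # assumed PQ-signature certificate chain, bytes

def packets_needed(size: int) -> int:
    """Packets required to carry `size` bytes of handshake data."""
    return math.ceil(size / PAYLOAD_PER_PACKET)

for name, size in [("classical", CLASSICAL_CHAIN), ("PQ", PQ_CHAIN)]:
    n = packets_needed(size)
    print(f"{name}: {n} packets, fits in IW10: {n <= 10}")
# classical: 4 packets, fits in IW10: True
# PQ: 12 packets, fits in IW10: False
```

Under these assumptions the classical chain fits comfortably in a default IW10, while the PQ chain overflows it and forces an extra round trip unless the initial window is raised.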

There are two secondary issues:

1. Whether the certificate chain is consuming an unacceptable fraction of total bandwidth. I agree that this is less likely for many network flows, but as noted above, there are some flows where it is a large fraction of the total.

2. Potential additional latency introduced by packet loss and the resulting retransmission round trip. Every additional packet increases the chance of one of them being lost, and you need the entire certificate chain before the handshake can complete.
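The effect of flight size on this is easy to quantify under a simplifying assumption of independent packet loss at a fixed rate (real loss is bursty, so treat these numbers as directional only). The packet counts below are illustrative.

```python
# Probability that at least one packet in a flight of n is lost,
# assuming independent loss at rate p (a simplification).
def p_any_lost(n_packets: int, p: float) -> float:
    return 1 - (1 - p) ** n_packets

for n in (4, 12):
    print(f"{n}-packet flight at 1% loss: "
          f"{p_any_lost(n, 0.01):.1%} chance of a recovery round trip")
```

Tripling the flight size roughly triples the chance that the handshake stalls on loss recovery, which is the latency cost being described.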

It seems you disagree about the importance of these issues, which is an understandable position, but where you're losing me is that you seem to be attributing this to the design of the protocols we're using. Can you explain further how (for instance) QUIC could have been designed differently in a way that would ameliorate these issues?