▲ agwa | 4 hours ago
At the beginning of a TCP connection, which is when the certificate chain is sent, you can't send more data than the initial congestion window without waiting for it to be acknowledged. 160KB is far beyond the initial congestion window, so on a high-latency connection the additional time would be higher than the numbers you calculated. Of course, if the web page is very bloated the user might not notice, but not all pages are bloated. The increased certificate size would also be painful for Certificate Transparency logs, which are required to store certificates and transmit them to anyone who asks. MTC doesn't require logs to store the subject public key. | ||||||||||||||||||||||||||||||||||||||
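To make the round-trip argument concrete, here is a small sketch estimating how many RTTs an idealized TCP slow start needs to deliver a payload, assuming a typical 1460-byte MSS and the Linux default initial congestion window of 10 segments (RFC 6928); it ignores losses, delayed ACKs, and TLS record overhead:

```python
MSS = 1460          # typical TCP maximum segment size, bytes
INITCWND = 10       # common default initial congestion window, segments (RFC 6928)

def rtts_to_send(nbytes, cwnd=INITCWND, mss=MSS):
    """Round trips to deliver nbytes under idealized slow start:
    cwnd doubles each RTT, no losses, no delayed-ACK effects."""
    sent, rounds = 0, 0
    while sent < nbytes:
        sent += cwnd * mss
        cwnd *= 2
        rounds += 1
    return rounds

print(rtts_to_send(4 * 1024))    # 4 KB chain fits in the initial window -> 1 RTT
print(rtts_to_send(160 * 1024))  # 160 KB chain -> 4 RTTs
```

Under these assumptions a 160 KB chain costs 3 extra round trips before the handshake can complete, which on a 150 ms path is roughly an extra half second.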
▲ Veserv | 4 hours ago | parent
That is exactly the type of poor design that I was saying should be rectified. You can already configure your initial congestion window, and if you are connecting to a system expecting the use of PQ encryption, you should set your initial congestion window large enough for the certificate; doing otherwise is the height of incompetence and should be fixed. You could also use a better protocol like QUIC, which has an independently flow-controlled crypto stream, and you can avoid amplification attacks by pre-sending adequate amounts of data so that amplification prevention never activates.

And I fail to see how going from 4 KB of certificate chain to 160 KB poses a serious storage or transmission problem. You can fit literal millions of such chains into RAM on a reasonable server, and literal billions into storage. Sure, if you exactly right-sized your CT servers you might need to upgrade them, but the absolute amount of resources needed for this is minuscule.
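For what "configure your initial congestion window" looks like in practice, here is a sketch for Linux using `ip route`; the interface name and gateway address are placeholders, and a window of 120 segments (~175 KB at a 1460-byte MSS) is an illustrative value chosen to cover a 160 KB chain, not a recommendation:

```shell
# Inspect the current default route (initcwnd is the kernel default, 10, unless shown)
ip route show default

# Raise the initial congestion and receive windows on the default route.
# "via 192.0.2.1 dev eth0" are placeholder values -- substitute your own
# gateway and interface from the output above.
ip route change default via 192.0.2.1 dev eth0 initcwnd 120 initrwnd 120
```

The setting is per-route, takes effect for new connections only, and does not persist across reboots unless it is reapplied by your network configuration tooling.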