johncolanduoni 12 hours ago

> And sticking with that example, once it becomes TCP's job to handle packet loss due to transmission errors (instead of just congestion), things go south pretty quickly.

Outside of wireless links (where some degree of FEC is necessary regardless), this is mostly because TCP’s checksum is so weak. QUIC, for example, handles this much better, since each packet’s authenticated encryption doubles as a robust error-detecting code. And unlike TLS over TCP, the connection is resilient to these failures: a TCP packet that is corrupted but still passes the TCP checksum will kill the TLS connection on top of it, instead of simply being retransmitted.
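To illustrate how weak it is: TCP uses the RFC 1071 Internet checksum, a 16-bit ones'-complement sum. Because addition is commutative, reordering the packet's 16-bit words leaves the checksum unchanged, so whole classes of corruption pass undetected. A minimal sketch (not TCP's actual implementation, just the checksum math):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: ones'-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return (~total) & 0xFFFF

original = b"\x12\x34\x56\x78"
swapped = b"\x56\x78\x12\x34"  # same 16-bit words, different order

# The corruption goes completely undetected:
assert internet_checksum(original) == internet_checksum(swapped)
```

A cryptographic MAC (as in QUIC's AEAD) has no such structure, so any bit flip changes the tag with overwhelming probability.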

lxgr 11 hours ago | parent

Ah, I meant "go south" in terms of performance, not correctness. Most TCP congestion control algorithms interpret loss exclusively as a congestion signal, since that's what loss has historically meant on most lower layers.

This is why newer TCP variants that use different congestion signals deal better with networks that violate that assumption, such as Starlink: https://blog.apnic.net/2024/05/17/a-transport-protocols-view...
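The performance cost is easy to see in a toy model. A hedged sketch of Reno-style AIMD (hypothetical simulation, not any real stack's code): the sender halves its window on every loss, whether the loss came from queue overflow or from random link corruption, so a lossy but uncongested link keeps the window pinned low.

```python
def simulate_aimd(rounds: int, loss_rounds: set) -> float:
    """Toy loss-based congestion control: additive increase per RTT,
    multiplicative decrease on any loss, congestion-related or not."""
    cwnd = 1.0
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1.0, cwnd / 2)  # any loss is treated as congestion
        else:
            cwnd += 1.0                # additive increase per RTT
    return cwnd

clean = simulate_aimd(40, set())              # lossless path
noisy = simulate_aimd(40, set(range(0, 40, 4)))  # corruption every 4th RTT

# Random corruption caps throughput even with no queueing at all:
assert noisy < clean
```

A delay- or model-based algorithm (e.g. BBR) would largely ignore these non-congestion losses and keep the window near the path's actual capacity.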

Other than that, I didn't realize that TLS has no way of just retransmitting broken data without breaking the entire connection (and a potentially expensive request or response with it)! Makes sense at that layer, but I never thought about it in detail. Good to know, thank you.