mgaunard 5 hours ago

and then Cloudflare converts that to HTTP/2 or even 1.1 for the backend

vanviegen 5 hours ago | parent [-]

So? Those protocols work fine within the reliable low latency network of a datacenter.

wongarsu 4 hours ago | parent [-]

I'd even go as far as claiming that on reliable wired connections (like between Cloudflare and your backend) HTTP/2 is superior to HTTP/3. Choosing HTTP/3 for that part of the journey would be a downgrade.
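
For what it's worth, here's a minimal sketch of what that looks like on the origin side in Go: an h2c (HTTP/2 over cleartext) handler, assuming the CDN-to-origin hop is trusted and the CDN is actually configured to speak HTTP/2 to the origin (Cloudflare has an "HTTP/2 to Origin" toggle, if I remember right). The port and handler are just illustrative:

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "golang.org/x/net/http2"
        "golang.org/x/net/http2/h2c"
    )

    func main() {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // r.Proto shows which protocol this hop actually used,
            // e.g. "HTTP/2.0" or "HTTP/1.1".
            fmt.Fprintf(w, "served over %s\n", r.Proto)
        })
        // h2c = HTTP/2 without TLS, which only makes sense on a hop you
        // already trust (port is illustrative).
        log.Fatal(http.ListenAndServe(":8080", h2c.NewHandler(handler, &http2.Server{})))
    }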

klempner 4 hours ago | parent | next [-]

At the very least, the benefits of QUIC are very dubious for low-RTT connections like those inside a datacenter, especially when you lose a bunch of hardware offload support and move a fair bit of actual work into userspace, where threads need to be scheduled, etc. On the other hand, Cloudflare to backend is not necessarily low RTT and likely has nonzero congestion.

With that said, I am 100% in agreement that the primary benefits of QUIC in most cases would be between client and CDN, whereas the costs are comparable at every hop.

hshdhdhehd 3 hours ago | parent [-]

Is CF typically serving from the edge, or from the location nearest to the server? I imagine it would be from the edge so that it can CDN what it can. So... most of the time it won't be a low-latency connection from CF to the backend, unless your backend is globally distributed too.

immibis 7 minutes ago | parent | prev [-]

Also, within a single server, you should not use HTTP between your frontend nginx and your application server; use FastCGI or SCGI instead, as they preserve metadata (like the client IP) much better. You can also use them over the network within a datacenter, in theory.
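
To make the metadata point concrete, a minimal sketch in Go using net/http/fcgi, assuming nginx points fastcgi_pass at the socket and includes the stock fastcgi_params file (which forwards REMOTE_ADDR): over FastCGI the app sees the original client address directly in r.RemoteAddr, whereas behind a plain proxy_pass that field would be nginx's own address and you'd be left parsing X-Forwarded-For. The socket path is just an example:

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/http"
        "net/http/fcgi"
    )

    func main() {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // With FastCGI, Go fills RemoteAddr from the REMOTE_ADDR
            // parameter nginx forwards, i.e. the original client IP.
            fmt.Fprintf(w, "client: %s\n", r.RemoteAddr)
        })
        // Unix socket that nginx's fastcgi_pass points at (path illustrative).
        l, err := net.Listen("unix", "/run/app.sock")
        if err != nil {
            log.Fatal(err)
        }
        log.Fatal(fcgi.Serve(l, handler))
    }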