▲ Galanwe | 7 months ago
> the best network I’ve ever benchmarked was AWS and measured about 55µs for a round trip

What is "a network" here? Few infrastructures are optimised for latency; most are geared toward providing high throughput instead. In fact, apart from HFT, I don't think most businesses are all that latency sensitive. Most infrastructure providers will give you SLAs of high single-digit or low double-digit microseconds from Mahwah/Carteret to NY4, but those are private, dedicated links. There's little point in optimising latency when your traffic ends up on the public internet, where the smallest hops are milliseconds away.
▲ jiggawatts | 7 months ago
> There's little point in optimising latency when your traffic ends up on the public internet, where the smallest hops are milliseconds away.

That's just plain wrong. Lower latency always improves everything: not just responsiveness, but also bandwidth. Because of TCP slow start and congestion-control algorithms, lower latency directly results in higher throughput. Not to mention that these latencies add up, which matters especially for chatty microservices applications.

Don't forget that a typical TCP+HTTPS connection requires something like 5 round trips, and that's assuming the DNS record is already cached! Add in firewalls, load balancers, proxies, sidecars, ingress, and who knows what else, and suddenly you're staring down the barrel of 15 milliseconds of latency before the data can even exit the data centre. The threshold for an "instant" response is 16.7 ms end-to-end (one frame at 60 Hz), including refreshing the HTML DOM and painting pixels to the screen.

Google and AWS know this, which is why their data-centre networks have ~50µs latencies, some of the best in the industry. Everyone else: "Nah, don't bother!"
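To make the "latencies add up" point concrete, here's a rough back-of-the-envelope sketch in Python. The round-trip counts (uncached DNS, TCP handshake, TLS 1.2, one HTTP request) and the example RTT values are my own assumptions for illustration, not measurements:

    # Rough sketch: how RTT dominates both connection setup and early
    # throughput. Round-trip counts assume TLS 1.2 and an uncached DNS
    # lookup; the RTT values below are illustrative, not measured.

    def setup_time_ms(rtt_ms: float) -> float:
        # DNS (1) + TCP handshake (1) + TLS 1.2 handshake (2) + HTTP GET (1)
        round_trips = 5
        return round_trips * rtt_ms

    def slow_start_bytes(rtt_count: int, init_cwnd: int = 10, mss: int = 1460) -> int:
        # During slow start the congestion window roughly doubles each RTT,
        # so early transfer speed is bounded by latency, not link bandwidth.
        return sum(init_cwnd * 2**i * mss for i in range(rtt_count))

    for rtt_ms in (0.05, 1.0, 15.0):  # ~50µs data centre, 1ms metro, 15ms of hops
        kib = slow_start_bytes(4) / 1024
        print(f"RTT {rtt_ms:6.2f} ms: setup ≈ {setup_time_ms(rtt_ms):6.2f} ms, "
              f"first 4 RTTs move {kib:.0f} KiB in {4 * rtt_ms:.2f} ms")

The punchline: the same ~214 KiB moves in 0.2 ms at data-centre RTTs but takes 60 ms once the RTT is 15 ms, and that's before link bandwidth even enters the picture.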