|
| ▲ | Galanwe 7 months ago | parent | next [-] |
| > the best network I’ve ever benchmarked was AWS and measured about 55µs for a round trip |
| What is "a network" here? Few infrastructures are optimised for latency; most are geared toward providing high throughput instead. In fact, apart from HFT, I don't think most businesses are all that latency sensitive. Most infrastructure providers will give you SLAs of high single-digit or low double-digit microseconds from Mahwah/Carteret to NY4, but those are private/dedicated links. There's little point in optimising latency when your network ends up on the internet, where the smallest hops are milliseconds away. |
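For context on where a single number like that 55µs round trip usually comes from, here is a minimal sketch of a UDP echo timing loop. The peer address and echo port are placeholders, and a serious benchmark would pin the thread to a core, warm up, and report percentiles rather than one sample:

```c
/* Minimal sketch of a round-trip measurement: send a small UDP packet to an
 * echo server and time the reply with CLOCK_MONOTONIC. The address and port
 * are placeholders, not from the thread above. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(7);                       /* assumed UDP echo service */
    inet_pton(AF_INET, "10.0.0.2", &peer.sin_addr); /* placeholder address */

    char buf[64] = "ping";
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    sendto(fd, buf, sizeof buf, 0, (struct sockaddr *)&peer, sizeof peer);
    recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);   /* blocks until the echo returns */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("round trip: %.1f us\n", us);
    close(fd);
    return 0;
}
```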
|
| ▲ | dahfizz 7 months ago | parent | prev | next [-] |
| The key is that blibble is talking about switches. Modern switches can process packets at line rate. If you're working in AWS, you are almost certainly hitting a router, which is comparatively slower. Not to mention you are dealing with virtualized hardware, and you are probably sharing all the switches and routers along your path (if someone else's packet is ahead of yours in the queue, you have to wait). |
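Rough numbers behind the "packet ahead of yours in the queue" point: every full-size frame already queued on the egress port adds roughly one serialization time to your wait. The frame size and link speeds below are illustrative, not measurements from any particular network:

```c
/* Back-of-the-envelope queuing wait: each full-MTU frame ahead of you adds
 * roughly one serialization time on the shared egress port. */
#include <stdio.h>

int main(void) {
    const double frame_bits = 1500.0 * 8;               /* one full 1500-byte frame */
    const double gbps[] = {10.0, 25.0, 100.0};

    for (int i = 0; i < 3; i++) {
        double ser_us = frame_bits / (gbps[i] * 1e3);   /* bits / (Gb/s * 1e3) -> us */
        printf("%5.0f Gb/s: %.2f us per frame, %.1f us if 10 frames are ahead\n",
               gbps[i], ser_us, 10 * ser_us);
    }
    return 0;
}
```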
|
| ▲ | crest 7 months ago | parent | prev | next [-] |
| I assume 1-3 hops of modern switches without congestion. Given 100 Gb/s lanes, these numbers are possible if you get all the bottlenecks out of the way. The moment you hit a deep queue, the latency explodes. |
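To put numbers on "the latency explodes": draining a standing backlog at line rate costs backlog_bytes × 8 / link_rate, which quickly dwarfs the sub-microsecond cost of an uncongested cut-through hop. The backlog sizes below are illustrative, not taken from any particular switch:

```c
/* Why a deep queue dominates: extra latency is just the time to drain the
 * backlog at line rate, even on a 100 Gb/s port. */
#include <stdio.h>

int main(void) {
    const double link_gbps = 100.0;
    const double backlog_kb[] = {0, 64, 1024, 16384};   /* standing queue depth, KiB */

    for (int i = 0; i < 4; i++) {
        double us = backlog_kb[i] * 1024 * 8 / (link_gbps * 1e3);
        printf("%8.0f KiB queued ahead -> %10.2f us extra latency\n",
               backlog_kb[i], us);
    }
    return 0;
}
```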
|
| ▲ | blibble 7 months ago | parent | prev [-] |
| that's because cloud networks are complete shit. this is Xilinx/Mellanox cards with kernel bypass, cut-through switches, and busy-waiting, in reality, in a prod system. |
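As a rough illustration of the busy-waiting half of that setup, here is a spin-polling receive loop on a plain non-blocking POSIX socket. A production path of the kind described would poll the NIC rings directly through a kernel-bypass stack (e.g. Onload/ef_vi, VMA, or DPDK) rather than calling recv() at all; the port number is a placeholder:

```c
/* Busy-wait receive illustration: spin on a non-blocking socket instead of
 * sleeping in epoll/select, trading a burned core for lower and more
 * predictable wakeup latency. Real kernel-bypass setups poll the NIC's rings
 * in userspace and never enter the kernel on this path. */
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                 /* placeholder port */
    bind(fd, (struct sockaddr *)&addr, sizeof addr);
    fcntl(fd, F_SETFL, O_NONBLOCK);              /* never block in recv() */

    char buf[2048];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n > 0) {
            /* handle the packet immediately, then go straight back to spinning */
        } else if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK) {
            perror("recv");
            break;
        }
        /* no sleep, no epoll: busy-waiting keeps the core hot and the cache warm */
    }
    close(fd);
    return 0;
}
```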