gtirloni 7 months ago

Do you mean 1-2ms?

eqvinox 7 months ago | parent [-]

No, 1-2us is correct for that — in a datacenter, with cut-through switching.

gtirloni 7 months ago | parent | next [-]

That's really impressive. I need to update myself on this topic. Thanks.

mickg10 7 months ago | parent [-]

In reality - with decent switches at 25G, and no FEC - node-to-node is reliably under 300 ns (0.3 µs)

jiggawatts 7 months ago | parent | prev [-]

Meanwhile the best network I’ve ever benchmarked was AWS and measured about 55µs for a round trip!

What on earth are you using that gets you down to single digits!?
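A round trip like that can be measured with a simple UDP echo loop. A minimal sketch (names and the loopback setup are illustrative, not from the thread; on localhost the number reflects kernel-stack overhead rather than the wire):

```python
import socket
import time

def udp_rtt_us(server_addr, n=1000, payload=b"x" * 64):
    """Return the median UDP round-trip time to server_addr, in microseconds.

    Assumes something at server_addr echoes each datagram back.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    samples = []
    for _ in range(n):
        t0 = time.perf_counter_ns()
        sock.sendto(payload, server_addr)
        sock.recvfrom(2048)  # wait for the echo
        samples.append((time.perf_counter_ns() - t0) / 1000)
    samples.sort()
    return samples[n // 2]  # median is more robust than mean for latency
```

The median (rather than the mean) avoids the long tail from scheduler preemptions dominating the figure.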

Galanwe 7 months ago | parent | next [-]

> the best network I’ve ever benchmarked was AWS and measured about 55µs for a round trip

What is "a network" here?

Few infrastructures are optimised for latency, most are geared toward providing high throughput instead.

In fact, apart from HFT, I don't think most businesses are all that latency sensitive. Most infrastructure providers will give you SLAs of high single or low double digit microseconds from Mahwah/Carteret to NY4, but these are private/dedicated links. There's little point in optimising latency when your network ends up on the internet, where the smallest hops are milliseconds away.

dahfizz 7 months ago | parent | prev | next [-]

The key is that blibble is talking about switches. Modern switches can process packets at line rate.

If you're working in AWS, you are almost certainly hitting a router, which is comparatively slower. Not to mention you are dealing with virtualized hardware, and you are probably sharing all the switches and routers along your path (if someone else's packet is ahead of yours in the queue, you have to wait).

crest 7 months ago | parent | prev | next [-]

I assume 1-3 hops of modern switches without congestion. Given 100Gb/s lanes these numbers are possible if you get all the bottlenecks out of the way. The moment you hit a deep queue the latency explodes.

blibble 7 months ago | parent | prev [-]

that's because cloud networks are complete shit

this is xilinx/mellanox cards with kernel bypass and cut-through switches with busy-waiting
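The busy-waiting part is the idea of spinning on the receive path instead of blocking, which trades a burned core for avoiding kernel wakeup latency. A toy sketch of the pattern on a plain non-blocking socket (real kernel-bypass stacks poll the NIC's ring buffers in userspace directly, not a socket):

```python
import socket

def busy_wait_recv(sock, bufsize=2048):
    """Spin until a datagram arrives instead of sleeping in the kernel.

    Avoids the interrupt/wakeup path at the cost of pinning a core at 100%.
    Illustrative only -- kernel-bypass (e.g. via DPDK-style drivers) skips
    the socket layer entirely.
    """
    sock.setblocking(False)
    while True:
        try:
            return sock.recvfrom(bufsize)
        except BlockingIOError:
            pass  # nothing yet; retry immediately rather than yielding
```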

in reality, in a prod system