| ▲ | eqvinox 16 hours ago |
| No, 1-2us is correct for that — in a datacenter, with cut-through switching. |
|
| ▲ | gtirloni 15 hours ago | parent | next [-] |
| That's really impressive. I need to update myself on this topic. Thanks. |
| |
| ▲ | mickg10 14 hours ago | parent [-]
In reality - with decent switches at 25G - and no FEC - node-to-node is reliably under 300ns (0.3 µs)
| ▲ | znyboy 13 hours ago | parent | next [-]
Considering that 300 light-nanoseconds is about 90m, getting a response (or even just a one-way delivery) in that time is essentially running right at the limits of physics/causality.
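As a quick back-of-the-envelope check of that figure (my arithmetic, not the commenter's; signals in fibre or copper propagate at roughly two thirds of c, so the reachable distance is even shorter than the vacuum case):

    #include <stdio.h>

    int main(void) {
        const double c = 299792458.0; /* speed of light in vacuum, m/s */
        const double t = 300e-9;      /* the 300 ns one-way budget */

        printf("vacuum:         %.1f m\n", c * t);         /* ~89.9 m */
        printf("fibre (~0.66c): %.1f m\n", 0.66 * c * t);  /* ~59.4 m */
        return 0;
    }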
| ▲ | davekeck 14 hours ago | parent | prev | next [-]
Out of curiosity, how is that measured across machines? (The first thing that comes to my mind would be to use an oscilloscope with two probes, one to each machine, but I’m guessing that’s not it.)
| ▲ | toast0 10 hours ago | parent [-]
Measure the round trip and divide by two for the approximate one-way time. It'd be really neat to measure the time it takes for a packet to travel in one direction, but it's somewhere between hard and impossible[1]; a very short path has less room to be asymmetric though.

[1] If the clocks are synchronized, you can measure send time on one end and receive time on the other. But synchronizing clocks involves estimating the time it takes for signals to pass in each direction, typically by assuming each direction takes half the round trip.
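A minimal sketch of that round-trip approach, assuming a UDP echo service is already listening on the peer (the address and port below are placeholders), with the one-way figure estimated as RTT/2:

    /* rtt.c - estimate one-way latency as half the measured round trip */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(7);                        /* assumed echo port */
        inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr); /* placeholder peer */
        connect(s, (struct sockaddr *)&peer, sizeof peer);

        char buf[64] = "ping";
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        send(s, buf, sizeof buf, 0);   /* out... */
        recv(s, buf, sizeof buf, 0);   /* ...and back */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double rtt_ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("RTT %.0f ns, one-way estimate %.0f ns\n", rtt_ns, rtt_ns / 2);
        close(s);
        return 0;
    }

In practice you'd repeat the exchange many times and take the minimum, since any single sample can be inflated by scheduling or interrupt noise.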
| |
|
|
|
| ▲ | jiggawatts 9 hours ago | parent | prev [-] |
| Meanwhile the best network I’ve ever benchmarked was AWS and measured about 55µs for a round trip! What on earth are you using that gets you down to single digits!? |
| |
| ▲ | Galanwe an hour ago | parent | next [-]
> the best network I’ve ever benchmarked was AWS and measured about 55µs for a round trip

What is "a network" here? Few infrastructures are optimised for latency; most are geared toward providing high throughput instead. In fact, apart from HFT, I don't think most businesses are all that latency sensitive.

Most infrastructure providers will give you SLAs of high single-digit or low double-digit microseconds from Mahwah/Carteret to NY4, but these are private/dedicated links. There's little point in optimising latency when your network ends up on the internet, where the smallest hops are milliseconds away.
| ▲ | crest 6 hours ago | parent | prev | next [-]
I assume 1-3 hops of modern switches without congestion. Given 100Gb/s lanes, these numbers are possible if you get all the bottlenecks out of the way. The moment you hit a deep queue, the latency explodes.
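To put rough numbers on the per-hop cost (my arithmetic, not the commenter's): a store-and-forward hop has to clock in the whole frame before sending it on, so frame size over link rate bounds the hop latency; cut-through switching forwards once the header has arrived and avoids most of this.

    #include <stdio.h>

    int main(void) {
        const double bits = 1500 * 8.0;   /* full-size frame payload, ignoring Ethernet overhead */
        const double gbps[] = {10, 25, 100};

        /* serialization delay = frame bits / link rate; bits / (Gb/s) comes out in ns */
        for (int i = 0; i < 3; i++)
            printf("%5.0f Gb/s: %6.0f ns per store-and-forward hop\n",
                   gbps[i], bits / gbps[i]);
        return 0;
    }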
| ▲ | jiggawatts 5 hours ago | parent [-]
So, are you talking about theoretical latencies here based on bandwidths and cable lengths, or actual measured latencies end-to-end between hosts? I know that "in principle" the physics of the cabling allows single-digit microseconds, but I've never seen it anywhere near that low even with cross-over cables with zero switches in-path!
| ▲ | eqvinox 4 hours ago | parent [-]
You need high-bandwidth links (the time to get the entire packet across starts to matter), run on bare metal (or have very well-working HW virtualisation support), and tune NIC parameters and OS processing appropriately. But it's practically achievable. Switches in these scenarios (e.g. 25GE DC targeted) are pretty predictable and add <1μs (unless misconfigured).
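As one concrete example of the OS-side tuning (an illustration of the general idea, not something from the comment above): on Linux you can shorten or disable interrupt coalescing with ethtool -C, keep the application on the NIC's NUMA node, and let the socket busy-poll the device queue instead of sleeping on an interrupt. A sketch of that last knob, assuming a kernel/libc that exposes SO_BUSY_POLL (setting it may require CAP_NET_ADMIN):

    #include <stdio.h>
    #include <sys/socket.h>

    int main(void) {
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        /* SO_BUSY_POLL: on a blocking receive, spin on the device queue for up
         * to this many microseconds before falling back to interrupt-driven wakeup. */
        int busy_poll_us = 50;
    #ifdef SO_BUSY_POLL
        if (setsockopt(s, SOL_SOCKET, SO_BUSY_POLL,
                       &busy_poll_us, sizeof busy_poll_us) != 0)
            perror("setsockopt(SO_BUSY_POLL)");
    #else
        (void)busy_poll_us;
        fprintf(stderr, "SO_BUSY_POLL not available on this platform\n");
    #endif
        return 0;
    }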
|
| |
| ▲ | blibble 3 hours ago | parent | prev [-]
that's because cloud networks are complete shit

this is xilinx/mellanox cards with kernel bypass and cut-through switches, with busy-waiting

in reality, in a prod system
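For a rough feel of the busy-waiting part, here's a plain-sockets stand-in (real kernel-bypass stacks such as DPDK or Verbs poll the NIC's queues directly from user space, but the control flow is the same spin-until-data pattern):

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Spin on a non-blocking receive instead of sleeping in the kernel.
     * Burns a whole core, but removes wakeup latency from the receive path. */
    static ssize_t busy_recv(int fd, void *buf, size_t len) {
        for (;;) {
            ssize_t n = recv(fd, buf, len, MSG_DONTWAIT);
            if (n >= 0)
                return n;            /* got a packet */
            if (errno != EAGAIN && errno != EWOULDBLOCK)
                return -1;           /* real error */
            /* nothing yet: loop immediately, no sleep, no yield */
        }
    }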
|