nh2 5 hours ago
What does "high performance" mean here? I get 40 Gbit/s over a single localhost TCP stream on my 10-year-old laptop with iperf3, so TCP itself does not seem to be a bottleneck if 40 Gbit/s is "high" enough, which it probably is currently for most people. I have also seen plenty of situations in which TCP is faster than UDP in datacenters. For example, on Hetzner Cloud VMs, iperf3 gets me 7 Gbit/s over TCP but only 1.5 Gbit/s over UDP. On Hetzner dedicated servers with 10 Gbit links, I get 10 Gbit/s over TCP but only 4.5 Gbit/s over UDP. But this could also be due to my use of iperf3 or its implementation. I also suspect that TCP being a protocol whose state is inspectable by the network equipment between endpoints allows implementing higher performance, but I have not validated whether that is actually done.
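For what it's worth, the localhost test above can be reproduced without iperf3 at all. This is a rough sketch (names and sizes are arbitrary, not anything from iperf3): one thread acts as a discard sink, the main thread blasts bytes at it over a loopback TCP connection and divides by elapsed time. Loopback skips the NIC, driver, and wire entirely, which is part of why the numbers come out so high.

```python
# Minimal localhost TCP throughput estimate, loosely in the spirit of
# `iperf3 -c 127.0.0.1`. All constants here are illustrative choices.
import socket
import threading
import time

PAYLOAD = b"\x00" * 65536          # 64 KiB per send() call
TOTAL_BYTES = 256 * 1024 * 1024    # 256 MiB: enough for a quick estimate

def sink(server_sock, counter):
    # Accept one connection and discard everything it sends, counting bytes.
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(1 << 20)
            if not data:
                break
            counter[0] += len(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # ephemeral port
server.listen(1)
received = [0]
receiver = threading.Thread(target=sink, args=(server, received))
receiver.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
start = time.perf_counter()
sent = 0
while sent < TOTAL_BYTES:
    client.sendall(PAYLOAD)
    sent += len(PAYLOAD)
client.close()
receiver.join()
elapsed = time.perf_counter() - start
server.close()

gbits = received[0] * 8 / elapsed / 1e9
print(f"{gbits:.1f} Gbit/s over a single localhost TCP stream")
```

On loopback this is really a measurement of memory copies and syscall overhead, not of TCP on a network, which is exactly the caveat raised downthread.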
KaiserPro an hour ago
Aspera was/is designed for high-latency links, i.e. sending multiple terabytes from London to New Zealand, or to LA. For that use case, Aspera was the best tool for the job. It's designed to be fast over links that a single TCP stream couldn't saturate. You could, if you were so bold, stack up multiple TCP connections and send the data down those. You got the same speed, but possibly not the same efficiency. It was a fucktonne cheaper to do, though.
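The "stack up multiple TCP connections" idea can be sketched roughly like this (a toy illustration, not Aspera's or any real tool's protocol): open N connections, push one chunk of the transfer down each in parallel, and let the receiver count what arrives. A real multi-stream file mover would also tag chunks with offsets so the receiver can reassemble the file.

```python
# Toy sketch of striping one transfer across several parallel TCP streams,
# so that no single stream's congestion window caps throughput on a
# high-latency path. Everything runs on localhost purely for illustration.
import socket
import threading

NUM_STREAMS = 4
CHUNK = b"x" * (1 << 20)           # 1 MiB per stream, arbitrary size

received = [0] * NUM_STREAMS

def sink(server_sock, idx):
    # One receiver per stream; a real tool would reassemble by offset.
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(1 << 16)
            if not data:
                break
            received[idx] += len(data)

servers, receivers = [], []
for i in range(NUM_STREAMS):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))
    s.listen(1)
    servers.append(s)
    th = threading.Thread(target=sink, args=(s, i))
    th.start()
    receivers.append(th)

def send_chunk(addr):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(addr)
    c.sendall(CHUNK)
    c.close()

senders = [threading.Thread(target=send_chunk, args=(s.getsockname(),))
           for s in servers]
for th in senders:
    th.start()
for th in senders:
    th.join()
for th in receivers:
    th.join()
for s in servers:
    s.close()

total = sum(received)
print(total == NUM_STREAMS * len(CHUNK))  # True: every stream's chunk arrived
```

On a long fat pipe, each stream ramps up its own congestion window independently, which is why the aggregate can beat a single stream even though per-byte it's the same protocol.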
wtallis 42 minutes ago
> I get 40 Gbit/s over a single localhost TCP stream on my 10 years old laptop with iperf3.

Do you mean literally just streaming data from one process to another on the same machine, without that data ever actually transiting a real network link? There are so many caveats to that test that it's basically worthless for evaluating what could happen on a real network.
mprovost 44 minutes ago
High performance means transferring files from NZ to a director's yacht in the Mediterranean with a 40 Mbps satellite link and getting 40 Mbps, to the point that the link is unusable for anyone else.