ralferoo 5 days ago

I don't think he's saying it needs to be BitTorrent, just applying some principles from it.

For example, say you have a cluster of people on the call in the US and another cluster in the UK. Ping times across the ocean are 100ms or more and some packets will randomly be lost, but ping times within the UK are around 15ms at most. By working co-operatively and sharing among themselves, the clients in one cluster can fill in missing packets from a different cluster far quicker than requesting them from the originating host.

In general, the ability to request missing packets from a more local source should improve overall video call quality. The repair might still arrive "too late": for minimal latency you might use packets as soon as they arrive, perhaps even treat out-of-order packets as missing, and just display a blockier video instead. But if the clients can tolerate a little more latency (maybe a tunable setting, say 50ms more than the best case), then it should in theory work better than current systems.
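
Roughly what I mean, as a minimal Python sketch (the RTT numbers and the peer/origin request callbacks are placeholders, not anything real):

    import time

    EXTRA_LATENCY_BUDGET = 0.050   # tolerate ~50ms beyond the best-case playout time
    LOCAL_PEER_RTT       = 0.015   # within-cluster round trip (e.g. UK to UK)
    ORIGIN_RTT           = 0.100   # cross-ocean round trip to the sender

    class RepairBuffer:
        def __init__(self, request_from_peer, request_from_origin):
            self.packets = {}                        # seq -> payload
            self.request_from_peer = request_from_peer
            self.request_from_origin = request_from_origin

        def on_packet(self, seq, payload):
            self.packets[seq] = payload

        def playout(self, seq, best_case_time):
            """Return the payload for seq, or None if we give up and
            render a blockier frame instead."""
            if seq in self.packets:
                return self.packets[seq]
            deadline = best_case_time + EXTRA_LATENCY_BUDGET
            remaining = deadline - time.monotonic()
            if remaining >= LOCAL_PEER_RTT:          # a local repair fits in the budget
                return self.request_from_peer(seq, timeout=remaining)
            if remaining >= ORIGIN_RTT:              # usually won't fit inside a 50ms budget
                return self.request_from_origin(seq, timeout=remaining)
            return None                              # too late: treat as lost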

I've mulled over some of these ideas myself in the past, but they've never been high enough on my TODO list to try anything out.

delusional 5 days ago

> By working co-operatively and sharing among themselves, the clients in one cluster can fill in missing packets from a different cluster far quicker than requesting them from the originating host.

That's only true if you assume the nodes operate sequentially, which is not a given. If the nodes operate independently of one another (which they would, being non-cooperating), they'd all get a response in ~100ms (computation and signaling time is negligible here), which is faster than they could get it cooperatively, even if we assume perfect cooperation (100ms for the first local node + 15ms from there). It's parallelism. Doing less work might seem theoretically nice, but if you have the capacity to do the same work twice simultaneously, you avoid the synchronization cost.
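
Putting those numbers side by side (toy arithmetic only):

    ORIGIN_RTT = 0.100   # transatlantic round trip
    LOCAL_RTT  = 0.015   # within the local cluster

    independent = ORIGIN_RTT               # every node asks the origin in parallel
    cooperative = ORIGIN_RTT + LOCAL_RTT   # one node fetches, then relays locally

    print(f"independent: {independent*1000:.0f}ms, cooperative: {cooperative*1000:.0f}ms")
    # independent: 100ms, cooperative: 115ms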

Basically, it falls somewhere in my loose "tree based system" sketch. In this case the "trusted" nodes would be picked based on ping-time clustering, but the basic sketch is the same: you pick a subset of nodes to be your local nodes and then let that structure play out recursively.
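
A rough sketch of that clustering step, assuming you already have pairwise ping measurements (the threshold and data shapes are made up):

    def build_level(nodes, rtt, threshold=0.020):
        """Greedily group nodes whose pairwise RTT is under the threshold and
        pick the first member of each cluster as its relay. nodes is a list of
        ids; rtt maps frozenset({a, b}) to a round-trip time in seconds."""
        clusters = []
        for n in nodes:
            for c in clusters:
                if all(rtt[frozenset({n, m})] <= threshold for m in c):
                    c.append(n)
                    break
            else:
                clusters.append([n])
        return {c[0]: c[1:] for c in clusters}   # relay -> members it serves

    # Applying build_level to the relays again gives the next level up, and
    # recursing until one node remains yields the full relay tree.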

The problem you run into is latency. There's no good way to pick a global latency figure for the whole network, since it varies with how deep into the tree you are. As the tree grows deeper, you end up having to retune the delay. The only other option is to grow in width, at which point you've just created another linear growth problem, albeit with a lower slope.
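
The trade-off with some made-up per-hop numbers:

    import math

    HOP_RTT = 0.015   # assumed per-level relay latency

    def extra_delay(n_nodes, fanout):
        """Deep tree: worst-case added delay grows with depth = log_fanout(N)."""
        depth = math.ceil(math.log(n_nodes, fanout))
        return depth * HOP_RTT

    def per_relay_streams(n_nodes, depth):
        """Wide tree: fix the depth and the per-relay fan-out (upload load)
        keeps growing with N instead of staying bounded."""
        return math.ceil(n_nodes ** (1 / depth))

    for n in (16, 64, 256, 1024):
        print(n, f"{extra_delay(n, 4)*1000:.0f}ms deep,",
              per_relay_streams(n, 2), "streams per relay wide")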