rlpb 9 hours ago

> From a network point of view, BitTorrent is horrendous. It has no way of knowing network topology which frequently means traffic flows from eyeball network to eyeball network for which there is no "cheap" path available...

As a consumer, though, I pay the same for my data transfer regardless of where the endpoint is, and ISPs arrange peering accordingly. If this topology is common, then I expect ISPs to adjust their arrangements to cater for it, just as they do for any other traffic pattern.

dotwaffle 9 hours ago | parent

> ISPs arrange peering accordingly

Two eyeball networks (consumer/business ISPs) are unlikely to have large PNIs (private network interconnects) with each other across wide geographic areas to cover sudden bursts of traffic between them. They will, however, have substantial capacity to content networks (not just CDNs, but AWS, Google, etc.), which is what they will have built out.

BitTorrent turns fairly predictable "North/South" traffic, where capacity can be planned in advance and handed off "hot potato" as quickly as possible, into what is essentially "East/West" traffic with no clear pattern. That causes massive congestion and/or stranded capacity, because ISPs have to carry the traffic over long distances they are not used to, with no guarantee that the large flow will still exist in a few weeks' time.

If BitTorrent knew network topology, it could act smarter -- CDNs accept BGP feeds from carriers and ISPs so that they can steer traffic; that isn't practical for BitTorrent!

Sesse__ 7 hours ago | parent | next

> If BitTorrent knew network topology, it could act smarter -- CDNs accept BGP feeds from carriers and ISPs so that they can steer traffic; that isn't practical for BitTorrent!

AFAIK this has been suggested a number of times, but has been refused out of fear of creating “islands” that carry distinct sets of chunks. It is, of course, a non-issue if you have a large number of fast seeds around the world (and if the tracker would reliably give you those instead of just a random set of peers!), but that really isn't what BitTorrent is optimized for in practice.
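
A rough sketch of what a locality-biased picker might look like (Python; all names and numbers here are made up), reserving some random slots precisely so you don't get those islands:

    import random

    def pick_peers(candidates, my_asn, max_peers=50, random_slots=12):
        """candidates: list of (ip, asn) pairs; the ASN lookup is assumed
        to come from some external source (a BGP feed, a GeoIP-style DB)."""
        near = [p for p in candidates if p[1] == my_asn]
        far = [p for p in candidates if p[1] != my_asn]
        random.shuffle(near)
        random.shuffle(far)
        # Mostly nearby peers, but always keep some far ones so the
        # swarm doesn't fragment into per-network islands.
        chosen = near[:max_peers - random_slots] + far[:random_slots]
        # Top up from whichever pool still has peers left over.
        for p in near[max_peers - random_slots:] + far[random_slots:]:
            if len(chosen) >= max_peers:
                break
            chosen.append(p)
        return chosen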

dotwaffle 4 hours ago | parent

Exactly. As it happens, this is an area I'm working on right now -- instead of using a star topology (direct), or a mesh (BitTorrent), or a tree (explicitly configured CDN), to use an optimistic DAG. We'll see if it gets any traction.

sulandor 9 hours ago | parent | prev

BitTorrent will make the best use of whatever bandwidth is available. Better to think of it as a dynamic CDN which can seamlessly incorporate static CDN nodes (see webseed).

It could surely be made topology-aware, but IMHO handing that problem to the congestion-control and routing mechanisms at lower layers works well enough and should not be a problem.
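
For reference, a webseed (BEP 19) is just an extra "url-list" key in the torrent metainfo pointing at a plain HTTP mirror. A minimal sketch, with placeholder URLs and sizes:

    # Minimal metainfo sketch with a webseed (BEP 19).
    # Clients treat the HTTP mirror as an always-on seed
    # alongside ordinary peers. All values are placeholders.
    metainfo = {
        "announce": "http://tracker.example.org/announce",
        "url-list": ["https://mirror.example.org/files/image.iso"],
        "info": {
            "name": "image.iso",
            "piece length": 262144,
            "length": 700 * 1024 * 1024,
            "pieces": b"...",  # concatenated SHA-1 piece hashes, elided
        },
    }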

dotwaffle 8 hours ago | parent

> BitTorrent will make the best use of whatever bandwidth is available.

At the expense of other traffic. Do this experiment: find something large-ish to download over HTTP, perhaps an ISO or similar from Debian or FreeBSD. See what the speed is like, and try looking at a few websites.

Now have a large torrent active at the same time, and see how far the HTTP download's speed drops, and how much slower the web feels. Perhaps try a Twitch stream or a YouTube video, and watch the quality degrade and/or the stream start rebuffering.

Your HTTP download uses a single TCP connection, and most websites will also use just one (plus perhaps a few short-lived extra connections for JS libraries on other domains, etc.). By comparison, BitTorrent will have dozens if not hundreds of connections open, so instead of splitting your connection roughly in half, it ends up monopolising 95%+ of it.
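
A quick back-of-envelope check, assuming the simplified model that loss-based congestion control splits the bottleneck roughly equally per flow:

    # One HTTP flow vs N BitTorrent flows sharing one bottleneck,
    # each flow getting ~1/(N+1) of the link (a simplification).
    for n in (1, 10, 100):
        http = 1 / (n + 1)
        print(f"{n:3d} BT flows: HTTP {http:6.1%}, BT {1 - http:6.1%}")
    #   1 BT flows: HTTP  50.0%, BT  50.0%
    #  10 BT flows: HTTP   9.1%, BT  90.9%
    # 100 BT flows: HTTP   1.0%, BT  99.0%

That last line is where the 95%+ comes from.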

The other main issue I forgot to mention is that on most cloud providers, downloading from the internet is free but uploading to the internet costs a lot. So not many people on public cloud are going to want to seed torrents!
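
Rough numbers, assuming an illustrative AWS-style egress rate of $0.09/GB (actual pricing varies by provider, region, and volume tier):

    # Illustrative cost of seeding from a public cloud instance.
    # $0.09/GB is an assumed egress rate, not a quoted price.
    EGRESS_USD_PER_GB = 0.09

    def seed_cost(torrent_gb, ratio):
        return torrent_gb * ratio * EGRESS_USD_PER_GB

    # Seeding a 50 GB torrent to a 2.0 ratio uploads 100 GB:
    print(f"${seed_cost(50, 2.0):.2f}")  # $9.00

Downloading that same 100 GB in the other direction would typically cost nothing.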

crtasm 3 hours ago | parent | next

If your torrent client is having a negative effect on other traffic then use its bandwidth limiter.

You can also lower how many connections it makes, but I don't know anyone who's needed to change that. Could you show us which client defaults to connecting to hundreds of peers?

dotwaffle 2 hours ago | parent

My example was to show what happens locally -- the ISP has no control over how many connections you make. I'm saying that if you have X TCP connections for HTTP and 100X TCP connections for BitTorrent, the HTTP connections will be drowned out. Therefore, when the link at your ISP becomes congested, HTTP will be disproportionately affected.

For the second question, read the section on choking at https://deluge-torrent.org/userguide/bandwidthtweaking/ -- Deluge appears to default to a maximum of 120 connections per torrent with a global maximum of 250 (though I've seen 500+ in my brief searching, mostly for Transmission and other clients).

I'll admit a lot of my BitTorrent knowledge is dated (I last used it ~15 years ago), but the point stands: ISPs are built for "North-South" traffic, that is, to/from the customer and the networks with the content -- not between customers, and certainly not between customers of different ISPs.

sneak 6 hours ago | parent | prev

The number of connections isn’t relevant. A single connection can cause the same problem with enough traffic. Your bandwidth is not allocated on a per-connection basis.

dotwaffle 4 hours ago | parent

If you download 2 separate files over HTTP, you'd expect each to get roughly 1/2 of the available bandwidth at the bottleneck.

With 1 HTTP connection downloading a file and 100 BitTorrent connections competing against it, you'll find the HTTP throughput significantly reduced. That's how congestion-control algorithms are designed: rough fairness per connection. It's why the first version of BBR that Google released was unpopular; it stomped on other traffic.