jmyeet 2 hours ago

The benefits are twofold: physical colocation and bandwidth.

Thunderbolt 5 offers 80Gbps of bidirectional bandwidth. PCIe 5.0 x16 offers 1024Gbps of bidirectional bandwidth. This matters.
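Rough math behind those figures (a sketch; the 32 GT/s line rate and 128b/130b encoding overhead for PCIe 5.0 are my assumptions from the spec, not stated in the comment, and TB5's asymmetric 120Gbps mode is ignored):

```python
# Back-of-envelope: PCIe 5.0 x16 vs. Thunderbolt 5 bandwidth.
# Assumed: PCIe 5.0 = 32 GT/s per lane with 128b/130b line coding;
# TB5 = 80 Gbps per direction in its symmetric mode.
GT_PER_LANE = 32          # gigatransfers/s per lane, PCIe 5.0
ENCODING = 128 / 130      # 128b/130b coding efficiency
LANES = 16

pcie5_per_dir = GT_PER_LANE * ENCODING * LANES   # Gbps, one direction
pcie5_total = 2 * pcie5_per_dir                  # both directions combined
tb5_per_dir = 80

print(f"PCIe 5.0 x16: {pcie5_per_dir:.0f} Gbps/dir, {pcie5_total:.0f} Gbps total")
print(f"Ratio vs TB5: {pcie5_per_dir / tb5_per_dir:.1f}x per direction")
```

So the raw "1024Gbps" rounds off the encoding overhead, but the order of magnitude holds: roughly 6x TB5 in each direction.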

TB5 cables can only get so long, whereas fiber goes much farther with far less trouble. This means that in a data center type environment, you could virtualize your GPUs and attach them as necessary, putting them in a separate bank (probably on the same rack).

dcrazy an hour ago | parent | next [-]

Active optical (yes!) Thunderbolt cables can be much longer. After all, optical fiber was the original medium for Thunderbolt, back when it was still called Light Peak.

I couldn’t find any optical TB5 cables, but here’s a 4.5m TB4 one: https://www.owc.com/blog/the-new-superlong-40gb-s-owc-active...

And if TB3 is enough, Corning makes them in lengths up to 50m: https://www.corning.com/microsites/coc/oem/documents/ocbc/OE...

As for bandwidth, the medium transition actually seems to limit the author's setup: the conversion loses some of the advanced link-training features needed for the highest-bandwidth PCIe 3 connections, never mind PCIe 5.

zamadatix 43 minutes ago | parent | next [-]

Hundreds of meters is considered short range in the world of *SFP. If you just plan on putting the GPUs in the same rack then I'm not sure it really matters, but you can really put anything anywhere in your DC and have things zoned with *SFP.

I don't think there is any reason TB couldn't do the same, beyond that it would be even more niche to want non-modular/patchable cables+transceivers at those lengths (especially since fiber is often bundled as dozens or hundreds of strands in a single trunk cable between racks).

dmitrygr 42 minutes ago | parent | prev [-]

For the curious, that 50m cable is $500 MSRP. https://1sourcevideo.com/shop/corning-50-meter-thunderbolt-3...

mikepurvis 2 hours ago | parent | prev | next [-]

"same rack" should still be fine for 1m passive TB5 cable though, right?

consp 2 hours ago | parent | prev [-]

> 1024Gbps

Good luck getting a 1Tbit transceiver. Anydirectional. Also it's 512Gbit-ish per direction.

za_creature an hour ago | parent | next [-]

The video is about a 2x1 link, which the author hopes to eventually scale up to 3x4 using 40 gig transceivers. I'd say thunderbolt is probably safe in the near future.

throwaway270925 an hour ago | parent | prev | next [-]

Easy, fs.com has 1.6Tbps OSFP for about 570€ - though only up to 1m length apparently.

jmyeet an hour ago | parent [-]

I was looking into the highest bandwidth optical transceivers. 400Gbps were easy enough to find so thanks for posting this. I honestly didn't know there were 1.6Tbps transceivers like this.

One note: I believe the SMF max fiber length is 2km not 1m [1]. The data sheet [2] also says:

> - 2000m max on single mode fiber

[1]: https://www.vitextech.com/products/1-6t-osfp-2fr4

[2]: https://resource.fs.com/mall/resource/cn_osfp-2fr4-16t-data-...

jauntywundrkind an hour ago | parent | prev | next [-]

That's 64Gb per lane across x16 lanes. That doesn't sound so daunting?

There are already 800Gb transceivers readily available, and 1.6T is probably getting preview deploys to some hyperscalers and other early adopters as we speak.
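The per-lane framing checks out as a back-of-envelope, assuming (my numbers, not the comment's) PCIe 5.0's 32 GT/s per lane per direction and a typical 8x100G electrical-lane layout for an 800G module:

```python
# Per-lane sanity check. Assumptions (mine, not from the thread):
# PCIe 5.0 signals 32 GT/s per lane per direction; a common 800G
# optical module runs 8 electrical lanes at 100 Gbps each (PAM4).
pcie_lanes = 16
pcie_gbps_per_lane_per_dir = 32

per_dir = pcie_lanes * pcie_gbps_per_lane_per_dir    # Gbps one way
per_lane_both_dirs = 2 * pcie_gbps_per_lane_per_dir  # the "64Gb per lane" figure
osfp_800g = 8 * 100                                  # Gbps across 8 lanes

print(per_dir, per_lane_both_dirs, osfp_800g)  # 512 64 800
```

So "64Gb per lane" is the combined-both-directions number; each direction is 32Gb per lane, which optical lanes at 100G already exceed comfortably.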

jmyeet 2 hours ago | parent | prev [-]

Bidirectional is a lot like biweekly. Depending on context, biweekly means twice a week or once every two weeks, and bidirectional can likewise mean either per direction or the total of both directions.

But yes I meant 512Gbps each way, to be clear.

fc417fc802 an hour ago | parent [-]

I'm only a single datapoint, but I've never encountered that usage. My understanding of a bidirectional link is that it meets the same spec in both directions simultaneously. That matters precisely because many links aren't bidirectional, sharing a single physical link between the two logical directions.

dcrazy an hour ago | parent [-]

The more precise terms are full-duplex and half-duplex.