Touching the Elephant – TPUs(considerthebulldog.com)
104 points by giuliomagnifico 8 hours ago | 33 comments
Zigurd 6 hours ago | parent | next [-]

The extent to which TPU architecture is built for the purpose also doesn't happen in a single design generation. Ironwood is the seventh generation of TPU, and that matters a lot.

desideratum 4 hours ago | parent | prev | next [-]

The Scaling ML textbook also has an excellent section on TPUs. https://jax-ml.github.io/scaling-book/tpus/

jauntywundrkind 2 hours ago | parent [-]

I also enjoyed https://henryhmko.github.io/posts/tpu/tpu.html https://news.ycombinator.com/item?id=44342977 .

The work that XLA & schedulers are doing here is wildly impressive.

This feels drastically harder to work with than Itanium must have been: ~400-bit VLIW, across extremely diverse execution units. The workload is different, it's not general purpose, but it's still awe-inspiring to know not just that they built the chip but that the software folks can actually use such a wildly weird beast.

I wish we saw more industry uptake for XLA. Uptake's not bad, per se: there's a bunch of different hardware it can target! But what amazing secret sauce, it's open source, and it doesn't feel like there's the industry rally behind it that it deserves. It feels like Nvidia is only barely beginning to catch up, to dig a new moat, with the just-announced Nvidia Tiles. Such huge overlap. Afaik, please correct me if wrong, but XLA isn't at present particularly useful at scheduling across machines, is it? https://github.com/openxla/xla

alevskaya an hour ago | parent | next [-]

I do think it's a lot simpler than the problem Itanium was trying to solve. Neural nets are just way more regular in nature, even with block sparsity, compared to generic consumer pointer-hopping code. I wouldn't call it "easy", but we've found that writing performant NN kernels for a VLIW architecture chip is in practice a lot more straightforward than other architectures.

JAX/XLA does offer some really nice tools for doing automated sharding of models across devices, but for really large performance-optimized models we often handle the comms stuff manually, similar in spirit to MPI.
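
Roughly, the explicit-comms mode looks like this: a minimal sketch using shard_map with a hand-placed psum. The mesh shape, axis name, and sizes here are made up, and the shard_map import path has moved around between JAX releases:

    from functools import partial
    import jax
    import jax.numpy as jnp
    from jax.sharding import Mesh, PartitionSpec as P
    from jax.experimental.shard_map import shard_map

    # Hypothetical 1-D mesh over all local devices.
    mesh = Mesh(jax.devices(), axis_names=("data",))

    @partial(shard_map, mesh=mesh, in_specs=P("data"), out_specs=P())
    def global_sum(x):
        # x is the per-device shard; psum is an explicit all-reduce
        # across the "data" axis, in the spirit of MPI_Allreduce.
        return jax.lax.psum(jnp.sum(x), axis_name="data")

    x = jnp.ones(jax.device_count() * 4)   # divides evenly across devices
    print(jax.jit(global_sum)(x))          # 4 * device_count, replicated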

desideratum 2 hours ago | parent | prev | next [-]

Thanks for sharing this. I agree w.r.t. XLA. I've been moving to JAX after many years of using torch, and XLA is kind of magic. I think torch.compile has quite a lot of catching up to do.

> XLA isn't at present particularly useful at scheduling across machines,

I'm not sure if you mean compiler-based distributed optimizations, but JAX does this with XLA: https://docs.jax.dev/en/latest/notebooks/Distributed_arrays_...
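
For anyone curious, the compiler-owns-the-comms style looks roughly like this (a toy sketch; the mesh, axis name, and shapes are arbitrary, and XLA's SPMD partitioner decides where the actual collectives go):

    import jax
    import jax.numpy as jnp
    from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

    mesh = Mesh(jax.devices(), axis_names=("x",))

    # Shard `a` row-wise across devices; replicate `b` everywhere.
    a = jax.device_put(jnp.ones((jax.device_count() * 2, 1024)),
                       NamedSharding(mesh, P("x", None)))
    b = jax.device_put(jnp.ones((1024, 1024)), NamedSharding(mesh, P()))

    @jax.jit
    def f(a, b):
        # No explicit communication: the compiler places whatever
        # all-gathers/reduce-scatters it decides are needed.
        return jnp.tanh(a @ b)

    print(f(a, b).sharding)   # output comes back sharded along "x"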

cpgxiii an hour ago | parent | prev [-]

In Itanium's heyday, the compilers and libraries were pretty good at handling HPC workloads, which is really the closest anyone was running then to modern NN training/inference. The problem with Itanium and its compilers was that people obviously wanted to run workloads that looked nothing like HPC (databases, web servers, etc) and the architecture and compilers weren't very good at that. There have always been very successful VLIW-style architectures in more specialized domains (graphics, HPC, DSP, now NPU); it just hasn't worked out well for general-purpose processors.

Simplita 7 hours ago | parent | prev | next [-]

This was a nice breakdown. I always feel most TPU articles skip over the practical parts. This one actually connects the concepts in a way that clicks.

ddtaylor 3 hours ago | parent | prev | next [-]

Are TPUs still stuck to their weird Google bucket thing when using GCP? I hated that.

alecco 5 hours ago | parent | prev [-]

I'm surprised the prospect of China making TPUs at scale in a couple of years isn't bigger news. It could be a deadly blow for Google, NVIDIA, and the rest. Combine it with China's nuclear base and labor pool. And the cherry on top, America will train 600k Chinese students as Trump agreed to.

The TPUv4 and TPUv6 docs were stolen by a Chinese national in 2022/2023: https://www.cyberhaven.com/blog/lessons-learned-from-the-goo... https://www.justice.gov/opa/pr/superseding-indictment-charge...

And that's just one guy who got caught. Who knows how many other cases there were.

A Chinese startup is already making clusters of TPUs and has revenue: https://www.scmp.com/tech/tech-war/article/3334244/ai-start-...

Workaccount2 5 hours ago | parent | next [-]

Manufacturing is the hard part. China certainly has the knowledge to build a TPU architecture without needing to steal the plans. What they don't have is the ability to actually build the chips. This is despite also having stolen lithography plans.

There is a dark art to semiconductor manufacturing that pretty much only TSMC really has the wizards for. Maybe Intel and Samsung a bit too.

mr_toad 26 minutes ago | parent | next [-]

> What they don't have is the ability to actually build the chips.

China has fabs. Most are older nodes and are used to manufacture chips used in cars and consumer electronics. They have companies that design chips (manufactured by TSMC), like the Ascend 910, which are purpose-built for AI. They may be behind, but they're not standing still.

radialstub 2 hours ago | parent | prev | next [-]

The software is the hard part. Western software still outclasses what the Chinese produce by a good amount.

PunchyHamster 22 minutes ago | parent [-]

This. The amount of investment into CUDA is high enough that most companies won't even consider the competition, even at a lower cost.

We desperately need more open frameworks for competition to work

aunty_helen 4 hours ago | parent | prev | next [-]

For China there is no plan B for semiconductor manufacturing. Invading Taiwan would be a dice roll and the consequences would be severe. They will create their own SOTA semiconductor industry. Same goes for their military.

The question is when? Does that come in time to deflate the US tech stock bubble? Or will the bubble start to level out and reality catch up, or will the market crash for another reason beforehand?

snek_case 3 hours ago | parent [-]

China has their own fabs. They are behind TSMC in terms of technology, but that doesn't mean they don't have fabs. They're currently ~7nm AFAIK. That's behind TSMC, but also not useless. They are obviously trying hard to catch up. I don't think we should just imagine that they never will. China has a lot of smart engineers and they know how strategically important chip manufacturing is.

This is like this funny idea people had in the early 2000s that China would continue to manufacture most US technology but they could never design their own competitive tech. Why would anyone think that?

Wrt invading Taiwan, I don't think there is any way China can get TSMC intact. If they do invade Taiwan (please God no), it would be a horrible bloodbath. Deaths in the hundreds of thousands and probably relentless bombing. Taiwan would likely destroy its own fabs to avoid them being taken. It would be sad and horrible.

mr_toad 22 minutes ago | parent | next [-]

> Wrt invading Taiwan, I don't think there is any way China can get TSMC intact.

There are so many trade and manufacturing links between China and Taiwan that an outright war would be economically disastrous for both countries.

renewiltord 2 hours ago | parent | prev [-]

If they invade Taiwan, we will scuttle the plants and direct ASML to disable their machines, which they will do because that's the condition under which we gave them the tech. They're not going to get it this way.

They’ll just catch the next wave of tech or eventually break into EUV.

tomrod 5 hours ago | parent | prev [-]

Lots of retired fab folks in the Austin area if you needed to spin up a local fab. It's really not a dark art; there are plenty of folks who have experience in the industry.

Workaccount2 4 hours ago | parent | next [-]

This is sort of like saying there are lots of kids in the local community college shop class if you want to spin up an F1 team.

The knowledge of making 2008-era chips doesn't get you past the gating factor in current SOTA chips: getting a handful of atoms to function as a transistor. There are probably 100 people on earth who know how to do this, and the majority of them are in Taiwan.

Again, China has literally stolen the plans for EUV lithography, years ago, and still cannot get it to work. Even Samsung and Intel, using the same machines as TSMC, cannot match what they are doing.

It's a dark art in the most literal sense.

Never mind that these new cutting-edge fabs cost ~$50 billion each.

checker659 4 hours ago | parent [-]

I've always wondered: if you have fuck-you money, wouldn't it be possible to build GPUs to do LLM matmuls with 2008 technology? Again, assuming energy and cooling costs don't matter.
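
Napkin math suggests raw FLOPs wouldn't even be the blocker (all numbers below are loose assumptions, e.g. a 2008-era GTX 280 at roughly ~0.9 TFLOPS FP32):

    # Back-of-envelope with assumed round numbers:
    peak_2008_flops = 0.9e12      # FLOP/s, ~GTX 280 class (assumed)
    params = 70e9                 # dense 70B-parameter model
    flops_per_token = 2 * params  # standard ~2N FLOPs/token rule of thumb

    print(peak_2008_flops / flops_per_token)  # ~6.4 tokens/s per chip
    # The real blockers: a 2008 card had ~1 GB of memory, while fp16
    # weights alone need ~140 GB, and you'd need fast interconnect
    # to gang hundreds of such cards together.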

pixl97 4 hours ago | parent | next [-]

Building the clean rooms at this scale is a limitation in itself. Just getting the factory set up and the machines put in so they don't generate particulate matter in operation is an art that compares in difficulty to making the chips themselves.

Zigurd 4 hours ago | parent | prev | next [-]

Energy, cooling, and how much of the building you're taking up do matter. They matter less, and in a more manageable way, for hyperscalers that have a long-established resource management practice in lots of big data centers, because they can phase in new technologies as they phase out the old. But it's a lot more daunting to think about building a data center big enough to compete with one full of Blackwell systems that are more than 10 times more performant per watt and per square foot.

Workaccount2 3 hours ago | parent | prev [-]

IIRC people have gotten LLMs to run on '80s hardware. Inference isn't overly compute heavy.

The killer is really training, which is insanely compute-intensive and only recently became practical in hardware at the scale needed.
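
The asymmetry falls out of the usual rule-of-thumb FLOP counts, ~2N FLOPs/token for inference vs ~6N for training an N-parameter dense model (a sketch with made-up round numbers):

    N = 70e9                       # parameters (assumed)
    T = 2e12                       # training tokens (assumed)

    inference_per_token = 2 * N    # ~1.4e11 FLOPs per generated token
    training_total = 6 * N * T     # ~8.4e23 FLOPs for the whole run

    # At ~1e15 FLOP/s sustained per accelerator (optimistic):
    years = training_total / 1e15 / 86400 / 365
    print(f"~{years:.0f} device-years of training")   # ~27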

Zigurd 4 hours ago | parent | prev [-]

The mask shops at TSMC and Samsung kind of are a dark art. It's one of the interesting things about the contract manufacturing business in chips. It's not just a matter of having access to state of the art equipment.

lukasb 20 minutes ago | parent | prev | next [-]

Yeah I'm terrified that TPUs will get cheaper, that would be awful.

llm_nerd 21 minutes ago | parent | prev | next [-]

>It could be a deadly blow for Google, NVIDIA, and the rest.

How would this be a deadly blow to Google? Google makes TPUs for their own services and products, avoiding paying the expensive nvidia tax. If other people make similar products, this has effectively zero impact on Google.

nvidia knew their days were numbered, at least in their ownership of the whole market. And China hardly had to steal the great plans for a TPU to make one; an FMA/MAC unit is actually a surprisingly simple bit of hardware to design. Everyone is adding "TPUs" to their chips - Apple, Qualcomm, Google, AMD, Amazon, Huawei, nvidia (that's what tensor cores are) and everyone else.
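
The core op really is just acc += a * b. A toy output-stationary systolic array (conceptually what a TPU's matrix unit scales up, minus the pipelining and skewed operand delivery) fits in a few lines, illustrative only:

    import numpy as np

    def systolic_matmul(A, B):
        n = A.shape[0]                 # square matrices for simplicity
        acc = np.zeros((n, n))
        for k in range(n):             # one outer product per "cycle"
            for i in range(n):
                for j in range(n):
                    acc[i, j] += A[i, k] * B[k, j]   # each cell: one MAC
        return acc

    A = np.arange(9.0).reshape(3, 3)
    assert np.allclose(systolic_matmul(A, np.eye(3)), A)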

And that startup isn't the big secret. Huawei already has solutions matching the H20. Once the specific need that can be serviced by an ASIC is clear, everyone starts building it.

>America will train 600k Chinese students as Trump agreed to

What great advantage do you think this is?

America isn't remotely the great gatekeeper on this. If anything, Taiwan + the Netherlands (ASML) are. China would gain infinitely more value from learning manufacturing and fabrication secrets than from cloning some specific ASIC.

fullofideas 4 hours ago | parent | prev [-]

>Combine it with China's nuclear base and labor pool. And the cherry on top, America will train 600k Chinese students as Trump agreed to.

I don't understand this part. What has a nuclear base got to do with chip manufacturing? And surely not all 600k students are learning chip design or stealing plans.

dylanowen 4 hours ago | parent | next [-]

I assume the nuclear reactors are to power the data centers using the new chips. There have been a few mentions on HN about the US being very behind on building enough power plants to run LLM workloads.

mr_toad 17 minutes ago | parent | next [-]

The frenetic pace of data center construction in the US means that nuclear is not a short-term option. No way are they going to wait a decade or more for generation to come on line. It’s going to be solar, batteries, and gas (turbines, and possibly fuel cells).

renewiltord 2 hours ago | parent | prev [-]

We should ask ourselves: is it worth ruining local communities in order to beat China in the global sphere?

alecco 4 hours ago | parent | prev | next [-]

I mean they have the power grid to run TPUs at 10x the scale of the USA.

About students, have you seen the microelectronics labs in American universities lately? A huge chunk are Chinese already. Same with some of the top AI labs.

pixl97 4 hours ago | parent | prev | next [-]

Nuclear power is what they are talking about, not weapons.

tormeh 4 hours ago | parent | prev [-]

Thankfully LLMs are a dead end, so nobody will make it to AGI by just throwing more electricity at the problem. Now if we could only have a new AI winter we could postpone the end of mankind as the dominant species on earth by another couple of decades.