tomrod 6 hours ago

Lots of retired fab folks in the Austin area if you needed to spin up a local fab. It's really not a dark art; there are plenty of folks with experience in the industry.

Workaccount2 5 hours ago | parent | next [-]

This is sort of like saying there are lots of kids in the local community college shop class if you want to spin up an F1 team.

Knowing how to make 2008-era chips gets you nowhere near getting a handful of atoms to function as a transistor in current SOTA chips. There are probably 100 people on earth who know how to do this, and the majority of them are in Taiwan.

Again, China literally stole the plans for EUV lithography years ago and still cannot get it to work. Even Samsung and Intel, using the same machines as TSMC, cannot match what TSMC is doing.

It's a dark art in the most literal sense.

Never mind that these new cutting-edge fabs cost ~$50 billion each.

checker659 5 hours ago | parent [-]

I've always wondered: if you have fuck-you money, wouldn't it be possible to build GPUs to do LLM matmul with 2008 technology? Assuming, again, that energy and cooling costs don't matter.
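
A rough back-of-envelope on the raw throughput gap, with very approximate peak figures (a 2008-era GTX 280 at roughly 0.9 TFLOP/s FP32 versus a current accelerator at roughly 1 PFLOP/s of low-precision matmul), just to set the scale:

    # Back-of-envelope: how many 2008-era GPUs per modern accelerator,
    # counting only peak matmul throughput. Both figures are rough assumptions.
    gpu_2008_flops = 0.9e12   # roughly a GTX 280's FP32 peak
    modern_flops = 1.0e15     # roughly one current accelerator's low-precision peak

    chips = modern_flops / gpu_2008_flops
    print(f"~{chips:.0f} 2008-era GPUs per modern accelerator")  # prints ~1111

And that's before memory: a 2008 card had about 1 GB of VRAM, so a large model's weights and KV cache would have to be sharded across an enormous number of boards.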

pixl97 5 hours ago | parent | next [-]

Building the clean rooms at this scale is a limitation in itself. Just getting the factory set up and the machines put in so they don't generate particulate matter in operation is an art that compares in difficulty to making the chips themselves.

Zigurd 5 hours ago | parent | prev | next [-]

Energy, cooling, and how much of the building you're taking up do matter. They matter less, and in a more manageable way, for hyperscalers with long-established resource management practices across lots of big data centers, because they can phase in new technologies as they phase out the old. But it's a lot more daunting to think about building a data center big enough to compete with one full of Blackwell systems that are more than 10 times more performant per watt and per square foot.

Workaccount2 4 hours ago | parent | prev [-]

IIRC people have gotten LLMs to run on '80s hardware. Inference isn't overly compute-heavy.

The killer really is training, which is insanely compute-intensive and has only recently become practical on hardware at the scale needed.
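
To put rough numbers on that gap, using the common ~2*N FLOPs-per-generated-token estimate for inference and ~6*N*D for training (the parameter and token counts below are assumptions for a GPT-2-class model):

    # Rough FLOPs comparison for a GPT-2-XL-sized model; all figures are assumptions.
    params = 1.5e9                                   # ~GPT-2 XL parameter count
    inference_flops_per_token = 2 * params           # ~2*N rule of thumb
    training_tokens = 40e9                           # assumed training set size
    training_flops = 6 * params * training_tokens    # ~6*N*D rule of thumb

    reply_flops = 1000 * inference_flops_per_token   # generating a 1,000-token reply
    print(f"inference, 1k-token reply: {reply_flops:.1e} FLOPs")    # ~3.0e12
    print(f"one full training run:     {training_flops:.1e} FLOPs") # ~3.6e20
    print(f"ratio: ~{training_flops / reply_flops:.0e}x")           # ~1e8x

Roughly eight orders of magnitude between serving one reply and training the model once is why inference squeaks by on old hardware while training doesn't.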

adgjlsfhk1 25 minutes ago | parent [-]

You could probably train a GPT-2-sized model with a SOTA architecture on a 2008 supercomputer. It would take a while, though.
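
Rough numbers on "a while", assuming the ~6*N*D training-FLOPs rule of thumb, a GPT-2-XL-sized model, a few tens of billions of tokens, and a petaflop-class 2008 machine (Roadrunner-ish) at modest utilization:

    # Back-of-envelope training time on a 2008 supercomputer; all numbers are assumptions.
    params = 1.5e9                         # ~GPT-2 XL parameter count
    tokens = 40e9                          # assumed training tokens
    flops_needed = 6 * params * tokens     # ~3.6e20 FLOPs total

    peak_flops = 1e15                      # IBM Roadrunner (2008) peaked around 1 PFLOP/s
    utilization = 0.10                     # assumed fraction of peak actually sustained

    seconds = flops_needed / (peak_flops * utilization)
    print(f"~{seconds / 86400:.0f} days")  # prints ~42 days with these assumptions

So on the order of a month or two of wall-clock time, under these very hand-wavy assumptions.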

Zigurd 5 hours ago | parent | prev [-]

The mask shops at TSMC and Samsung are kind of a dark art. It's one of the interesting things about the contract manufacturing business in chips: it's not just a matter of having access to state-of-the-art equipment.