Because none of our most advanced LLMs needs more than a few H100s, apparently. They would be better off building exascale computing that isn't centred on LLMs. It's blatantly promising undeliverable results.