skirmish 4 hours ago

I can assure you that most internal ML teams use TPUs for both training and inference; they are just so much easier to get. Whatever GPUs exist are either reserved for Google Cloud customers or loaned temporarily to researchers who want to publish results that are easily reproducible externally.