IceHegel 9 hours ago
I'm surprised more people are not talking about the fact that the two best models in the world, Gemini 3 and Claude Opus 4.5, were both trained on Google TPU clusters. Presumably, inference can be done on TPUs, Nvidia chips, or, in Anthropic's case, newer hardware like Trainium.
ggiigg 5 hours ago | parent
Google is a direct competitor to many of the LARGE buyers of GPUs, which makes buying TPUs from Google a non-starter for them from a business perspective. In addition, many companies cannot single-source due to risk considerations. Hardware is different because buying GPUs is a capital investment: you own the asset and revisit the supplier only at the next refresh cycle, not continuously as with rented compute.