studmuffin650 a day ago:
Also important to remember that Google is years ahead of most other AI shops in that they're running on custom silicon. This makes their inference (and maybe training) cheaper than at almost any other company. People don't realize this when comparing them to OpenAI/Anthropic: most shops there rely on NVIDIA GPUs, while Google is completely different with its custom TPU platform.
xnx a day ago:
> Also important to remember that Google is years ahead of most other AI shops in that they're running on custom silicon.

Not just the chips; Google's entire datacenter setup seems much more mature (e.g. liquid cooling, networking, etc.). I saw a video of a new Amazon datacenter (https://www.youtube.com/watch?v=vnGC4YS36gU) and it looks like a bunch of server racks in a warehouse.
cma a day ago:
Anthropic uses TPUs as well as NVIDIA GPUs. Compiler bugs in the tooling around the TPU platform caused most of their quality issues and customer churn this year, but I think they've since announced a big expansion of TPU use: https://www.anthropic.com/engineering/a-postmortem-of-three-...
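For anyone wondering how a compiler bug degrades quality instead of just crashing: here's a toy sketch (my own illustration with made-up numbers, not Anthropic's code or the actual bug) of how a silent precision drop in the logits can flip which token greedy decoding picks, with no error raised anywhere.

```python
# Toy illustration: a low-level numeric bug (e.g. a miscompilation that
# down-casts logits before sampling) can silently change model outputs.
# All values here are invented for demonstration purposes.
import numpy as np

def top_token(logits: np.ndarray) -> int:
    """Greedy decoding: return the index of the highest logit."""
    return int(np.argmax(logits))

# Two logits that are very close together: the kind of near-tie that
# floating-point precision ends up deciding.
logits_fp32 = np.array([10.0001, 10.0002, 3.0], dtype=np.float32)

# Simulate the bug: logits get rounded to fp16 before the argmax. fp16
# cannot represent the 1e-4 gap near 10, so both values collapse to 10.0
# and argmax falls back to the first index.
logits_fp16 = logits_fp32.astype(np.float16)

print(top_token(logits_fp32))  # -> 1 (the intended token)
print(top_token(logits_fp16))  # -> 0 (wrong token, and nothing errored)
```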