▲ fooblaster 12 hours ago
If you think they are going to catch up with Google's software and hardware ecosystem on their first chip, you may be underestimating how hard this is. Google is on TPU v7. Meta has already tried with MTIA v1 and v2, and those haven't been deployed at scale for inference.
▲ nateb2022 11 hours ago | parent [-]
I don't think many of them will want to, though. As long as Nvidia, AMD, and other hardware vendors offer inference hardware at prices decent enough not to justify building a chip in-house, most companies won't. Some will probably experiment, but that will look more like a small team of researchers with a moderate budget than a burn-the-ships, we're-only-using-our-own-hardware approach.