written-beyond · 5 hours ago
I thought these TPUs were primarily used for inference?
vlovich123 · 4 hours ago · parent
These TPUs are for training. But even still, once you've trained, you need to run the model too. And these kinds of models already have a huge latency hit, so there's not much harm in running them away from the trading switches.
knowaveragejoe · 4 hours ago · parent
As the article states, there are dedicated chips for both training and inference.