lbhdc | 4 days ago
If you are willing to spread your workload across a few regions, getting that many GPUs on demand is doable. You can use something like compute classes on GCP to fall back to different machine types if you do hit stockouts. That doesn't make you immune to stockouts, but it makes things a lot more resilient. You can also use duty-cycle metrics to scale down your GPU workloads and trim some of the slack.
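As a rough sketch of the fallback idea, GKE's custom ComputeClass resource lets you list accelerator types in priority order so the scheduler tries the next entry when the preferred one is stocked out. The specific GPU types and the class name here are just illustrative placeholders, not a tested config:

```yaml
# Hypothetical GKE ComputeClass: prefer L4s, fall back to T4s on stockout.
apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: gpu-with-fallback   # placeholder name
spec:
  priorities:
    - gpu:
        type: nvidia-l4        # first choice
        count: 1
    - gpu:
        type: nvidia-tesla-t4  # fallback if L4s are unavailable
        count: 1
  nodePoolAutoCreation:
    enabled: true
```

Workloads then request the class via the `cloud.google.com/compute-class` node selector; check the GKE docs for the exact fields supported in your cluster version.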