fennecfoxy 3 hours ago
I mean sure, but in terms of inference performance per dollar and per watt, Nvidia's GPUs are pretty up there - unless China is pumping out domestic chips cheaply enough. Also with Nvidia you get the efficiency of everything (including inference) being built on/for CUDA; efforts to catch AMD up are still ongoing afaik. I wouldn't be surprised if things like DS were trained and are now hosted on Nvidia hardware.
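For concreteness, this is roughly the metric I mean by "per dollar / per watt": tokens/sec divided by board power and by price. A minimal sketch - every number here is an invented placeholder, not a measurement of any real Nvidia or domestic part:

    # Back-of-envelope "per dollar / per watt" comparison for inference.
    # All numbers below are made-up placeholders, not real benchmarks --
    # swap in measured tokens/sec, board power, and price for actual cards.

    def efficiency(tokens_per_sec, watts, price_usd):
        # tokens/sec per watt and per dollar for a single accelerator
        return {
            "tok_per_watt": tokens_per_sec / watts,
            "tok_per_dollar": tokens_per_sec / price_usd,
        }

    # hypothetical "Nvidia-class" vs "domestic" part, illustrative only
    card_a = efficiency(tokens_per_sec=2500, watts=700, price_usd=30000)
    card_b = efficiency(tokens_per_sec=1200, watts=550, price_usd=12000)

    print("card A:", card_a)
    print("card B:", card_b)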
re-thc 2 hours ago
> unless China is pumping out domestic chips cheaply enough

They are. Nvidia makes A LOT of profit. Hey, top stock for a reason.

> I wouldn't be surprised if things like DS were trained and are now hosted on Nvidia hardware

DS is "old". I wouldn't study them. The new ones have a mandate to at least run on local hardware. There are data center requirements. I agree they could still be trained on Nvidia GPUs (black market etc), but not run on them.